00:00:00.000 Started by upstream project "autotest-per-patch" build number 126205
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.077 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.078 The recommended git tool is: git
00:00:00.078 using credential 00000000-0000-0000-0000-000000000002
00:00:00.080 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.125 Fetching changes from the remote Git repository
00:00:00.129 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.178 Using shallow fetch with depth 1
00:00:00.178 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.178 > git --version # timeout=10
00:00:00.206 > git --version # 'git version 2.39.2'
00:00:00.206 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.225 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.225 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.291 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.303 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.348 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:05.348 > git config core.sparsecheckout # timeout=10
00:00:05.374 > git read-tree -mu HEAD # timeout=10
00:00:05.389 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:05.408 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:05.408 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:05.488 [Pipeline] Start of Pipeline
00:00:05.502 [Pipeline] library
00:00:05.503 Loading library shm_lib@master
00:00:05.503 Library shm_lib@master is cached. Copying from home.
00:00:05.517 [Pipeline] node
00:00:05.525 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.526 [Pipeline] {
00:00:05.533 [Pipeline] catchError
00:00:05.534 [Pipeline] {
00:00:05.542 [Pipeline] wrap
00:00:05.549 [Pipeline] {
00:00:05.554 [Pipeline] stage
00:00:05.555 [Pipeline] { (Prologue)
00:00:05.731 [Pipeline] sh
00:00:06.011 + logger -p user.info -t JENKINS-CI
00:00:06.030 [Pipeline] echo
00:00:06.032 Node: WFP8
00:00:06.039 [Pipeline] sh
00:00:06.331 [Pipeline] setCustomBuildProperty
00:00:06.340 [Pipeline] echo
00:00:06.341 Cleanup processes
00:00:06.345 [Pipeline] sh
00:00:06.621 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.621 3983230 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.634 [Pipeline] sh
00:00:06.916 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.916 ++ grep -v 'sudo pgrep'
00:00:06.916 ++ awk '{print $1}'
00:00:06.916 + sudo kill -9
00:00:06.916 + true
00:00:06.932 [Pipeline] cleanWs
00:00:06.941 [WS-CLEANUP] Deleting project workspace...
00:00:06.941 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.946 [WS-CLEANUP] done
00:00:06.951 [Pipeline] setCustomBuildProperty
00:00:06.962 [Pipeline] sh
00:00:07.239 + sudo git config --global --replace-all safe.directory '*'
00:00:07.321 [Pipeline] httpRequest
00:00:07.349 [Pipeline] echo
00:00:07.350 Sorcerer 10.211.164.101 is alive
00:00:07.355 [Pipeline] httpRequest
00:00:07.358 HttpMethod: GET
00:00:07.359 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:07.379 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:07.393 Response Code: HTTP/1.1 200 OK
00:00:07.393 Success: Status code 200 is in the accepted range: 200,404
00:00:07.394 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:15.803 [Pipeline] sh
00:00:16.085 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:16.103 [Pipeline] httpRequest
00:00:16.134 [Pipeline] echo
00:00:16.136 Sorcerer 10.211.164.101 is alive
00:00:16.145 [Pipeline] httpRequest
00:00:16.150 HttpMethod: GET
00:00:16.150 URL: http://10.211.164.101/packages/spdk_44e72e4e7af599194dee1a91eeb2b07a37eefc8b.tar.gz
00:00:16.151 Sending request to url: http://10.211.164.101/packages/spdk_44e72e4e7af599194dee1a91eeb2b07a37eefc8b.tar.gz
00:00:16.153 Response Code: HTTP/1.1 200 OK
00:00:16.153 Success: Status code 200 is in the accepted range: 200,404
00:00:16.154 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_44e72e4e7af599194dee1a91eeb2b07a37eefc8b.tar.gz
00:00:32.673 [Pipeline] sh
00:00:32.995 + tar --no-same-owner -xf spdk_44e72e4e7af599194dee1a91eeb2b07a37eefc8b.tar.gz
00:00:35.540 [Pipeline] sh
00:00:35.822 + git -C spdk log --oneline -n5
00:00:35.822 44e72e4e7 autopackage: Rename autopackage.sh to release_build.sh
00:00:35.822 255871c19 autopackage: Move core of the script to autobuild
00:00:35.822 bd4841ef7 autopackage: Replace SPDK_TEST_RELEASE_BUILD with SPDK_TEST_PACKAGING
00:00:35.822 719d03c6a sock/uring: only register net impl if supported
00:00:35.822 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:35.834 [Pipeline] }
00:00:35.850 [Pipeline] // stage
00:00:35.858 [Pipeline] stage
00:00:35.860 [Pipeline] { (Prepare)
00:00:35.881 [Pipeline] writeFile
00:00:35.897 [Pipeline] sh
00:00:36.178 + logger -p user.info -t JENKINS-CI
00:00:36.191 [Pipeline] sh
00:00:36.473 + logger -p user.info -t JENKINS-CI
00:00:36.485 [Pipeline] sh
00:00:36.765 + cat autorun-spdk.conf
00:00:36.765 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:36.765 SPDK_TEST_NVMF=1
00:00:36.765 SPDK_TEST_NVME_CLI=1
00:00:36.765 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:36.765 SPDK_TEST_NVMF_NICS=e810
00:00:36.765 SPDK_TEST_VFIOUSER=1
00:00:36.765 SPDK_RUN_UBSAN=1
00:00:36.765 NET_TYPE=phy
00:00:36.765 RUN_NIGHTLY=0
00:00:36.770 [Pipeline] readFile
00:00:36.796 [Pipeline] withEnv
00:00:36.797 [Pipeline] {
00:00:36.808 [Pipeline] sh
00:00:37.092 + set -ex
00:00:37.092 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:37.092 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:37.092 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.092 ++ SPDK_TEST_NVMF=1
00:00:37.092 ++ SPDK_TEST_NVME_CLI=1
00:00:37.092 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:37.092 ++ SPDK_TEST_NVMF_NICS=e810
00:00:37.092 ++ SPDK_TEST_VFIOUSER=1
00:00:37.092 ++ SPDK_RUN_UBSAN=1
00:00:37.092 ++ NET_TYPE=phy
00:00:37.092 ++ RUN_NIGHTLY=0
00:00:37.092 + case $SPDK_TEST_NVMF_NICS in
00:00:37.092 + DRIVERS=ice
00:00:37.092 + [[ tcp == \r\d\m\a ]]
00:00:37.092 + [[ -n ice ]]
00:00:37.092 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:37.092 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:37.092 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:37.092 rmmod: ERROR: Module irdma is not currently loaded
00:00:37.092 rmmod: ERROR: Module i40iw is not currently loaded
00:00:37.092 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:37.092 + true
00:00:37.092 + for D in $DRIVERS
00:00:37.092 + sudo modprobe ice
00:00:37.092 + exit 0
00:00:37.101 [Pipeline] }
00:00:37.118 [Pipeline] // withEnv
00:00:37.123 [Pipeline] }
00:00:37.139 [Pipeline] // stage
00:00:37.149 [Pipeline] catchError
00:00:37.150 [Pipeline] {
00:00:37.169 [Pipeline] timeout
00:00:37.169 Timeout set to expire in 50 min
00:00:37.171 [Pipeline] {
00:00:37.189 [Pipeline] stage
00:00:37.192 [Pipeline] { (Tests)
00:00:37.211 [Pipeline] sh
00:00:37.494 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:37.495 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:37.495 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:37.495 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:37.495 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:37.495 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:37.495 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:37.495 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:37.495 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:37.495 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:37.495 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:37.495 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:37.495 + source /etc/os-release
00:00:37.495 ++ NAME='Fedora Linux'
00:00:37.495 ++ VERSION='38 (Cloud Edition)'
00:00:37.495 ++ ID=fedora
00:00:37.495 ++ VERSION_ID=38
00:00:37.495 ++ VERSION_CODENAME=
00:00:37.495 ++ PLATFORM_ID=platform:f38
00:00:37.495 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:37.495 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:37.495 ++ LOGO=fedora-logo-icon
00:00:37.495 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:37.495 ++ HOME_URL=https://fedoraproject.org/
00:00:37.495 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:37.495 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:37.495 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:37.495 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:37.495 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:37.495 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:37.495 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:37.495 ++ SUPPORT_END=2024-05-14
00:00:37.495 ++ VARIANT='Cloud Edition'
00:00:37.495 ++ VARIANT_ID=cloud
00:00:37.495 + uname -a
00:00:37.495 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:37.495 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:40.067 Hugepages
00:00:40.067 node hugesize free / total
00:00:40.067 node0 1048576kB 0 / 0
00:00:40.067 node0 2048kB 0 / 0
00:00:40.067 node1 1048576kB 0 / 0
00:00:40.067 node1 2048kB 0 / 0
00:00:40.067
00:00:40.067 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:40.067 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:40.067 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:40.067 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:40.067 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:40.067 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:40.067 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:40.067 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:40.067 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:40.067 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:40.067 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:40.067 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:40.067 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:40.067 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:40.067 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:40.067 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:40.067 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:40.067 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:40.067 + rm -f /tmp/spdk-ld-path
00:00:40.067 + source autorun-spdk.conf
00:00:40.067 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:40.067 ++ SPDK_TEST_NVMF=1
00:00:40.067 ++ SPDK_TEST_NVME_CLI=1
00:00:40.067 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:40.067 ++ SPDK_TEST_NVMF_NICS=e810
00:00:40.067 ++ SPDK_TEST_VFIOUSER=1
00:00:40.067 ++ SPDK_RUN_UBSAN=1
00:00:40.067 ++ NET_TYPE=phy
00:00:40.067 ++ RUN_NIGHTLY=0
00:00:40.067 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:40.067 + [[ -n '' ]]
00:00:40.067 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:40.067 + for M in /var/spdk/build-*-manifest.txt
00:00:40.067 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:40.067 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:40.067 + for M in /var/spdk/build-*-manifest.txt
00:00:40.067 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:40.067 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:40.067 ++ uname
00:00:40.067 + [[ Linux == \L\i\n\u\x ]]
00:00:40.067 + sudo dmesg -T
00:00:40.067 + sudo dmesg --clear
00:00:40.067 + dmesg_pid=3984151
00:00:40.067 + [[ Fedora Linux == FreeBSD ]]
00:00:40.067 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:40.067 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:40.067 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:40.067 + [[ -x /usr/src/fio-static/fio ]]
00:00:40.067 + export FIO_BIN=/usr/src/fio-static/fio
00:00:40.067 + sudo dmesg -Tw
00:00:40.067 + FIO_BIN=/usr/src/fio-static/fio
00:00:40.067 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:40.067 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:40.067 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:40.067 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:40.067 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:40.067 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:40.067 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:40.067 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:40.067 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:40.067 Test configuration:
00:00:40.067 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:40.067 SPDK_TEST_NVMF=1
00:00:40.067 SPDK_TEST_NVME_CLI=1
00:00:40.067 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:40.067 SPDK_TEST_NVMF_NICS=e810
00:00:40.067 SPDK_TEST_VFIOUSER=1
00:00:40.067 SPDK_RUN_UBSAN=1
00:00:40.067 NET_TYPE=phy
00:00:40.067 RUN_NIGHTLY=0 16:42:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:40.067 16:42:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:40.067 16:42:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:40.067 16:42:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:40.067 16:42:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:40.067 16:42:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:40.067 16:42:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:40.067 16:42:46 -- paths/export.sh@5 -- $ export PATH
00:00:40.067 16:42:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:40.067 16:42:46 -- common/autobuild_common.sh@472 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:40.067 16:42:46 -- common/autobuild_common.sh@473 -- $ date +%s
00:00:40.067 16:42:46 -- common/autobuild_common.sh@473 -- $ mktemp -dt spdk_1721054566.XXXXXX
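The three `paths/export.sh` lines above prepend their directories unconditionally, which is why `/opt/go/1.21.1/bin` and friends end up in the final PATH twice. A guard like the following (a hypothetical helper, not part of SPDK) keeps such prepends idempotent:

```shell
# path_prepend: add a directory to the front of PATH only if it is not
# already present anywhere in PATH.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already on PATH: do nothing
        *) PATH="$1:$PATH" ;;   # otherwise prepend
    esac
}

PATH="/usr/bin:/bin"
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
echo "$PATH"                      # -> /opt/go/1.21.1/bin:/usr/bin:/bin
```

The `":$PATH:"` trick wraps both the haystack and the needle in colons so a match can only be a whole path component, never a substring of one.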
00:00:40.067 16:42:46 -- common/autobuild_common.sh@473 -- $ SPDK_WORKSPACE=/tmp/spdk_1721054566.ivIeeI
00:00:40.067 16:42:46 -- common/autobuild_common.sh@475 -- $ [[ -n '' ]]
00:00:40.067 16:42:46 -- common/autobuild_common.sh@479 -- $ '[' -n '' ']'
00:00:40.067 16:42:46 -- common/autobuild_common.sh@482 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:40.067 16:42:46 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:40.067 16:42:46 -- common/autobuild_common.sh@488 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:40.067 16:42:46 -- common/autobuild_common.sh@489 -- $ get_config_params
00:00:40.067 16:42:46 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:00:40.068 16:42:46 -- common/autotest_common.sh@10 -- $ set +x
00:00:40.068 16:42:46 -- common/autobuild_common.sh@489 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:40.068 16:42:46 -- common/autobuild_common.sh@491 -- $ start_monitor_resources
00:00:40.068 16:42:46 -- pm/common@17 -- $ local monitor
00:00:40.068 16:42:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:40.068 16:42:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:40.068 16:42:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:40.068 16:42:46 -- pm/common@21 -- $ date +%s
00:00:40.068 16:42:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:40.068 16:42:46 -- pm/common@21 -- $ date +%s
00:00:40.068 16:42:46 -- pm/common@25 -- $ sleep 1
00:00:40.068 16:42:46 -- pm/common@21 -- $ date +%s
00:00:40.068 16:42:46 -- pm/common@21 -- $ date +%s
00:00:40.068 16:42:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721054566
00:00:40.068 16:42:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721054566
00:00:40.068 16:42:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721054566
00:00:40.068 16:42:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721054566
00:00:40.068 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721054566_collect-vmstat.pm.log
00:00:40.068 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721054566_collect-cpu-load.pm.log
00:00:40.068 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721054566_collect-cpu-temp.pm.log
00:00:40.068 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721054566_collect-bmc-pm.bmc.pm.log
00:00:41.005 16:42:47 -- common/autobuild_common.sh@492 -- $ trap stop_monitor_resources EXIT
00:00:41.005 16:42:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:41.005 16:42:47 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:41.005 16:42:47 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:41.005 16:42:47 -- spdk/autobuild.sh@16 -- $ date -u
00:00:41.005 Mon Jul 15 02:42:47 PM UTC 2024
00:00:41.005 16:42:47 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:41.264 v24.09-pre-205-g44e72e4e7
00:00:41.264 16:42:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:41.264 16:42:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:41.264 16:42:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:41.264 16:42:47 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:41.264 16:42:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:41.264 16:42:47 -- common/autotest_common.sh@10 -- $ set +x
00:00:41.264 ************************************
00:00:41.264 START TEST ubsan
00:00:41.264 ************************************
00:00:41.264 16:42:47 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:41.264 using ubsan
00:00:41.264
00:00:41.264 real 0m0.000s
00:00:41.264 user 0m0.000s
00:00:41.264 sys 0m0.000s
00:00:41.264 16:42:47 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:41.264 16:42:47 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:41.264 ************************************
00:00:41.264 END TEST ubsan
00:00:41.264 ************************************
00:00:41.264 16:42:47 -- common/autotest_common.sh@1142 -- $ return 0
00:00:41.264 16:42:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:41.264 16:42:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:41.264 16:42:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:41.264 16:42:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:41.264 16:42:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:41.264 16:42:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:41.264 16:42:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:41.264 16:42:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:41.264 16:42:47 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:41.264 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:41.264 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:41.832 Using 'verbs' RDMA provider
00:00:54.622 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:06.827 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:06.827 Creating mk/config.mk...done.
00:01:06.827 Creating mk/cc.flags.mk...done.
00:01:06.827 Type 'make' to build.
00:01:06.827 16:43:11 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:06.827 16:43:11 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:06.827 16:43:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:06.827 16:43:11 -- common/autotest_common.sh@10 -- $ set +x
00:01:06.827 ************************************
00:01:06.827 START TEST make
00:01:06.827 ************************************
00:01:06.827 16:43:11 make -- common/autotest_common.sh@1123 -- $ make -j96
00:01:06.827 make[1]: Nothing to be done for 'all'.
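The `START TEST ubsan` / `END TEST ubsan` banners above come from the `run_test` wrapper that the autotest scripts use to delimit and time each sub-test. A minimal sketch of that pattern (a simplified guess at the helper's shape, not a verbatim copy of `autotest_common.sh`):

```shell
# run_test NAME CMD ARGS...: print delimited banners around CMD so the log
# can be cut into per-test sections, and propagate CMD's exit status.
run_test() {
    local name="$1"; shift
    echo "************************************"
    echo "START TEST $name"
    "$@"
    local rc=$?             # capture the wrapped command's exit status
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test ubsan echo 'using ubsan'
```

Because the banners use a fixed, greppable shape, a post-processing step can recover each test's output with nothing more than `sed -n '/START TEST ubsan/,/END TEST ubsan/p'`.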
00:01:06.827 The Meson build system
00:01:06.827 Version: 1.3.1
00:01:06.827 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:06.827 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:06.827 Build type: native build
00:01:06.827 Project name: libvfio-user
00:01:06.827 Project version: 0.0.1
00:01:06.827 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:06.827 C linker for the host machine: cc ld.bfd 2.39-16
00:01:06.827 Host machine cpu family: x86_64
00:01:06.827 Host machine cpu: x86_64
00:01:06.827 Run-time dependency threads found: YES
00:01:06.827 Library dl found: YES
00:01:06.827 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:06.827 Run-time dependency json-c found: YES 0.17
00:01:06.827 Run-time dependency cmocka found: YES 1.1.7
00:01:06.827 Program pytest-3 found: NO
00:01:06.827 Program flake8 found: NO
00:01:06.827 Program misspell-fixer found: NO
00:01:06.827 Program restructuredtext-lint found: NO
00:01:06.827 Program valgrind found: YES (/usr/bin/valgrind)
00:01:06.827 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:06.827 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:06.827 Compiler for C supports arguments -Wwrite-strings: YES
00:01:06.827 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:06.827 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:06.827 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:06.827 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:06.827 Build targets in project: 8
00:01:06.827 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:06.827 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:06.827
00:01:06.827 libvfio-user 0.0.1
00:01:06.827
00:01:06.827 User defined options
00:01:06.827 buildtype : debug
00:01:06.827 default_library: shared
00:01:06.827 libdir : /usr/local/lib
00:01:06.827
00:01:06.827 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:07.084 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:07.341 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:07.341 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:07.341 [3/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:07.341 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:07.341 [5/37] Compiling C object samples/null.p/null.c.o
00:01:07.341 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:07.341 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:07.341 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:07.341 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:07.341 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:07.341 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:07.341 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:07.341 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:07.341 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:07.341 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:07.341 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:07.341 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:07.341 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:07.341 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:07.341 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:07.341 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:07.341 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:07.341 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:07.341 [24/37] Compiling C object samples/server.p/server.c.o
00:01:07.341 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:07.341 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:07.341 [27/37] Compiling C object samples/client.p/client.c.o
00:01:07.341 [28/37] Linking target lib/libvfio-user.so.0.0.1
00:01:07.341 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:07.341 [30/37] Linking target samples/client
00:01:07.341 [31/37] Linking target test/unit_tests
00:01:07.598 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:07.598 [33/37] Linking target samples/server
00:01:07.598 [34/37] Linking target samples/null
00:01:07.598 [35/37] Linking target samples/gpio-pci-idio-16
00:01:07.598 [36/37] Linking target samples/lspci
00:01:07.598 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:07.598 INFO: autodetecting backend as ninja
00:01:07.598 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:07.855 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:07.855 ninja: no work to do.
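The libvfio-user build above is a standard out-of-tree Meson flow: configure into `build-debug`, compile with ninja, then stage the install under `DESTDIR` instead of writing into `/`. Echoed here as a dry run so it is safe to paste; drop the `echo`s to execute, and note the paths are illustrative, not the exact workspace paths from this job:

```shell
# Out-of-tree Meson build with a DESTDIR-staged install (dry run).
src=libvfio-user                 # source tree
build=$src/build-debug           # build dir, kept out of the source tree
stage=$PWD/stage                 # staging root instead of /

echo meson setup "$build" "$src" --buildtype debug -Ddefault_library=shared
echo ninja -C "$build"
echo env DESTDIR="$stage" meson install --quiet -C "$build"
```

Staging under `DESTDIR` is what lets the CI job bundle the installed tree into its workspace output rather than polluting the build host's `/usr/local`.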
00:01:13.117 The Meson build system
00:01:13.117 Version: 1.3.1
00:01:13.117 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:13.117 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:13.117 Build type: native build
00:01:13.117 Program cat found: YES (/usr/bin/cat)
00:01:13.117 Project name: DPDK
00:01:13.117 Project version: 24.03.0
00:01:13.117 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:13.117 C linker for the host machine: cc ld.bfd 2.39-16
00:01:13.117 Host machine cpu family: x86_64
00:01:13.117 Host machine cpu: x86_64
00:01:13.117 Message: ## Building in Developer Mode ##
00:01:13.117 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:13.117 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:13.117 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:13.117 Program python3 found: YES (/usr/bin/python3)
00:01:13.117 Program cat found: YES (/usr/bin/cat)
00:01:13.117 Compiler for C supports arguments -march=native: YES
00:01:13.117 Checking for size of "void *" : 8
00:01:13.117 Checking for size of "void *" : 8 (cached)
00:01:13.117 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:13.117 Library m found: YES
00:01:13.117 Library numa found: YES
00:01:13.117 Has header "numaif.h" : YES
00:01:13.117 Library fdt found: NO
00:01:13.117 Library execinfo found: NO
00:01:13.117 Has header "execinfo.h" : YES
00:01:13.117 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:13.117 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:13.117 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:13.117 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:13.117 Run-time dependency openssl found: YES 3.0.9
00:01:13.117 Run-time dependency libpcap found: YES 1.10.4
00:01:13.117 Has header "pcap.h" with dependency libpcap: YES
00:01:13.117 Compiler for C supports arguments -Wcast-qual: YES
00:01:13.117 Compiler for C supports arguments -Wdeprecated: YES
00:01:13.117 Compiler for C supports arguments -Wformat: YES
00:01:13.117 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:13.117 Compiler for C supports arguments -Wformat-security: NO
00:01:13.117 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:13.117 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:13.117 Compiler for C supports arguments -Wnested-externs: YES
00:01:13.117 Compiler for C supports arguments -Wold-style-definition: YES
00:01:13.117 Compiler for C supports arguments -Wpointer-arith: YES
00:01:13.117 Compiler for C supports arguments -Wsign-compare: YES
00:01:13.117 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:13.117 Compiler for C supports arguments -Wundef: YES
00:01:13.117 Compiler for C supports arguments -Wwrite-strings: YES
00:01:13.117 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:13.117 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:13.117 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:13.117 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:13.117 Program objdump found: YES (/usr/bin/objdump)
00:01:13.117 Compiler for C supports arguments -mavx512f: YES
00:01:13.117 Checking if "AVX512 checking" compiles: YES
00:01:13.117 Fetching value of define "__SSE4_2__" : 1
00:01:13.117 Fetching value of define "__AES__" : 1
00:01:13.117 Fetching value of define "__AVX__" : 1
00:01:13.117 Fetching value of define "__AVX2__" : 1
00:01:13.117 Fetching value of define "__AVX512BW__" : 1
00:01:13.117 Fetching value of define "__AVX512CD__" : 1
00:01:13.117 Fetching value of define "__AVX512DQ__" : 1
00:01:13.117 Fetching value of define "__AVX512F__" : 1
00:01:13.117 Fetching value of define "__AVX512VL__" : 1 00:01:13.117 Fetching value of define "__PCLMUL__" : 1 00:01:13.117 Fetching value of define "__RDRND__" : 1 00:01:13.117 Fetching value of define "__RDSEED__" : 1 00:01:13.117 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:13.117 Fetching value of define "__znver1__" : (undefined) 00:01:13.117 Fetching value of define "__znver2__" : (undefined) 00:01:13.117 Fetching value of define "__znver3__" : (undefined) 00:01:13.117 Fetching value of define "__znver4__" : (undefined) 00:01:13.117 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:13.117 Message: lib/log: Defining dependency "log" 00:01:13.117 Message: lib/kvargs: Defining dependency "kvargs" 00:01:13.117 Message: lib/telemetry: Defining dependency "telemetry" 00:01:13.117 Checking for function "getentropy" : NO 00:01:13.117 Message: lib/eal: Defining dependency "eal" 00:01:13.117 Message: lib/ring: Defining dependency "ring" 00:01:13.117 Message: lib/rcu: Defining dependency "rcu" 00:01:13.117 Message: lib/mempool: Defining dependency "mempool" 00:01:13.117 Message: lib/mbuf: Defining dependency "mbuf" 00:01:13.117 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:13.117 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:13.117 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:13.117 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:13.117 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:13.117 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:13.117 Compiler for C supports arguments -mpclmul: YES 00:01:13.117 Compiler for C supports arguments -maes: YES 00:01:13.117 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:13.117 Compiler for C supports arguments -mavx512bw: YES 00:01:13.117 Compiler for C supports arguments -mavx512dq: YES 00:01:13.117 Compiler for C supports arguments -mavx512vl: YES 00:01:13.117 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:13.117 Compiler for C supports arguments -mavx2: YES 00:01:13.117 Compiler for C supports arguments -mavx: YES 00:01:13.117 Message: lib/net: Defining dependency "net" 00:01:13.117 Message: lib/meter: Defining dependency "meter" 00:01:13.117 Message: lib/ethdev: Defining dependency "ethdev" 00:01:13.117 Message: lib/pci: Defining dependency "pci" 00:01:13.117 Message: lib/cmdline: Defining dependency "cmdline" 00:01:13.117 Message: lib/hash: Defining dependency "hash" 00:01:13.117 Message: lib/timer: Defining dependency "timer" 00:01:13.117 Message: lib/compressdev: Defining dependency "compressdev" 00:01:13.117 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:13.117 Message: lib/dmadev: Defining dependency "dmadev" 00:01:13.117 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:13.117 Message: lib/power: Defining dependency "power" 00:01:13.117 Message: lib/reorder: Defining dependency "reorder" 00:01:13.117 Message: lib/security: Defining dependency "security" 00:01:13.117 Has header "linux/userfaultfd.h" : YES 00:01:13.117 Has header "linux/vduse.h" : YES 00:01:13.117 Message: lib/vhost: Defining dependency "vhost" 00:01:13.117 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:13.117 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:13.117 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:13.117 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:13.117 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:13.117 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:13.117 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:13.117 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:13.117 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:13.117 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:13.117 Program doxygen found: YES (/usr/bin/doxygen) 00:01:13.117 Configuring doxy-api-html.conf using configuration 00:01:13.117 Configuring doxy-api-man.conf using configuration 00:01:13.117 Program mandb found: YES (/usr/bin/mandb) 00:01:13.117 Program sphinx-build found: NO 00:01:13.117 Configuring rte_build_config.h using configuration 00:01:13.117 Message: 00:01:13.117 ================= 00:01:13.117 Applications Enabled 00:01:13.117 ================= 00:01:13.117 00:01:13.117 apps: 00:01:13.117 00:01:13.117 00:01:13.117 Message: 00:01:13.117 ================= 00:01:13.117 Libraries Enabled 00:01:13.117 ================= 00:01:13.117 00:01:13.117 libs: 00:01:13.117 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:13.117 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:13.117 cryptodev, dmadev, power, reorder, security, vhost, 00:01:13.117 00:01:13.117 Message: 00:01:13.117 =============== 00:01:13.117 Drivers Enabled 00:01:13.117 =============== 00:01:13.117 00:01:13.117 common: 00:01:13.117 00:01:13.117 bus: 00:01:13.117 pci, vdev, 00:01:13.117 mempool: 00:01:13.117 ring, 00:01:13.117 dma: 00:01:13.117 00:01:13.117 net: 00:01:13.117 00:01:13.117 crypto: 00:01:13.117 00:01:13.117 compress: 00:01:13.117 00:01:13.117 vdpa: 00:01:13.117 00:01:13.117 00:01:13.117 Message: 00:01:13.117 ================= 00:01:13.117 Content Skipped 00:01:13.117 ================= 00:01:13.117 00:01:13.117 apps: 00:01:13.117 dumpcap: explicitly disabled via build config 00:01:13.117 graph: explicitly disabled via build config 00:01:13.117 pdump: explicitly disabled via build config 00:01:13.117 proc-info: explicitly disabled via build config 00:01:13.117 test-acl: explicitly disabled via build config 00:01:13.117 test-bbdev: explicitly disabled via build config 00:01:13.117 test-cmdline: explicitly disabled via build config 00:01:13.117 test-compress-perf: explicitly disabled via build config 00:01:13.117 test-crypto-perf: explicitly disabled via 
build config 00:01:13.118 test-dma-perf: explicitly disabled via build config 00:01:13.118 test-eventdev: explicitly disabled via build config 00:01:13.118 test-fib: explicitly disabled via build config 00:01:13.118 test-flow-perf: explicitly disabled via build config 00:01:13.118 test-gpudev: explicitly disabled via build config 00:01:13.118 test-mldev: explicitly disabled via build config 00:01:13.118 test-pipeline: explicitly disabled via build config 00:01:13.118 test-pmd: explicitly disabled via build config 00:01:13.118 test-regex: explicitly disabled via build config 00:01:13.118 test-sad: explicitly disabled via build config 00:01:13.118 test-security-perf: explicitly disabled via build config 00:01:13.118 00:01:13.118 libs: 00:01:13.118 argparse: explicitly disabled via build config 00:01:13.118 metrics: explicitly disabled via build config 00:01:13.118 acl: explicitly disabled via build config 00:01:13.118 bbdev: explicitly disabled via build config 00:01:13.118 bitratestats: explicitly disabled via build config 00:01:13.118 bpf: explicitly disabled via build config 00:01:13.118 cfgfile: explicitly disabled via build config 00:01:13.118 distributor: explicitly disabled via build config 00:01:13.118 efd: explicitly disabled via build config 00:01:13.118 eventdev: explicitly disabled via build config 00:01:13.118 dispatcher: explicitly disabled via build config 00:01:13.118 gpudev: explicitly disabled via build config 00:01:13.118 gro: explicitly disabled via build config 00:01:13.118 gso: explicitly disabled via build config 00:01:13.118 ip_frag: explicitly disabled via build config 00:01:13.118 jobstats: explicitly disabled via build config 00:01:13.118 latencystats: explicitly disabled via build config 00:01:13.118 lpm: explicitly disabled via build config 00:01:13.118 member: explicitly disabled via build config 00:01:13.118 pcapng: explicitly disabled via build config 00:01:13.118 rawdev: explicitly disabled via build config 00:01:13.118 regexdev: 
explicitly disabled via build config 00:01:13.118 mldev: explicitly disabled via build config 00:01:13.118 rib: explicitly disabled via build config 00:01:13.118 sched: explicitly disabled via build config 00:01:13.118 stack: explicitly disabled via build config 00:01:13.118 ipsec: explicitly disabled via build config 00:01:13.118 pdcp: explicitly disabled via build config 00:01:13.118 fib: explicitly disabled via build config 00:01:13.118 port: explicitly disabled via build config 00:01:13.118 pdump: explicitly disabled via build config 00:01:13.118 table: explicitly disabled via build config 00:01:13.118 pipeline: explicitly disabled via build config 00:01:13.118 graph: explicitly disabled via build config 00:01:13.118 node: explicitly disabled via build config 00:01:13.118 00:01:13.118 drivers: 00:01:13.118 common/cpt: not in enabled drivers build config 00:01:13.118 common/dpaax: not in enabled drivers build config 00:01:13.118 common/iavf: not in enabled drivers build config 00:01:13.118 common/idpf: not in enabled drivers build config 00:01:13.118 common/ionic: not in enabled drivers build config 00:01:13.118 common/mvep: not in enabled drivers build config 00:01:13.118 common/octeontx: not in enabled drivers build config 00:01:13.118 bus/auxiliary: not in enabled drivers build config 00:01:13.118 bus/cdx: not in enabled drivers build config 00:01:13.118 bus/dpaa: not in enabled drivers build config 00:01:13.118 bus/fslmc: not in enabled drivers build config 00:01:13.118 bus/ifpga: not in enabled drivers build config 00:01:13.118 bus/platform: not in enabled drivers build config 00:01:13.118 bus/uacce: not in enabled drivers build config 00:01:13.118 bus/vmbus: not in enabled drivers build config 00:01:13.118 common/cnxk: not in enabled drivers build config 00:01:13.118 common/mlx5: not in enabled drivers build config 00:01:13.118 common/nfp: not in enabled drivers build config 00:01:13.118 common/nitrox: not in enabled drivers build config 00:01:13.118 
common/qat: not in enabled drivers build config 00:01:13.118 common/sfc_efx: not in enabled drivers build config 00:01:13.118 mempool/bucket: not in enabled drivers build config 00:01:13.118 mempool/cnxk: not in enabled drivers build config 00:01:13.118 mempool/dpaa: not in enabled drivers build config 00:01:13.118 mempool/dpaa2: not in enabled drivers build config 00:01:13.118 mempool/octeontx: not in enabled drivers build config 00:01:13.118 mempool/stack: not in enabled drivers build config 00:01:13.118 dma/cnxk: not in enabled drivers build config 00:01:13.118 dma/dpaa: not in enabled drivers build config 00:01:13.118 dma/dpaa2: not in enabled drivers build config 00:01:13.118 dma/hisilicon: not in enabled drivers build config 00:01:13.118 dma/idxd: not in enabled drivers build config 00:01:13.118 dma/ioat: not in enabled drivers build config 00:01:13.118 dma/skeleton: not in enabled drivers build config 00:01:13.118 net/af_packet: not in enabled drivers build config 00:01:13.118 net/af_xdp: not in enabled drivers build config 00:01:13.118 net/ark: not in enabled drivers build config 00:01:13.118 net/atlantic: not in enabled drivers build config 00:01:13.118 net/avp: not in enabled drivers build config 00:01:13.118 net/axgbe: not in enabled drivers build config 00:01:13.118 net/bnx2x: not in enabled drivers build config 00:01:13.118 net/bnxt: not in enabled drivers build config 00:01:13.118 net/bonding: not in enabled drivers build config 00:01:13.118 net/cnxk: not in enabled drivers build config 00:01:13.118 net/cpfl: not in enabled drivers build config 00:01:13.118 net/cxgbe: not in enabled drivers build config 00:01:13.118 net/dpaa: not in enabled drivers build config 00:01:13.118 net/dpaa2: not in enabled drivers build config 00:01:13.118 net/e1000: not in enabled drivers build config 00:01:13.118 net/ena: not in enabled drivers build config 00:01:13.118 net/enetc: not in enabled drivers build config 00:01:13.118 net/enetfec: not in enabled drivers build 
config 00:01:13.118 net/enic: not in enabled drivers build config 00:01:13.118 net/failsafe: not in enabled drivers build config 00:01:13.118 net/fm10k: not in enabled drivers build config 00:01:13.118 net/gve: not in enabled drivers build config 00:01:13.118 net/hinic: not in enabled drivers build config 00:01:13.118 net/hns3: not in enabled drivers build config 00:01:13.118 net/i40e: not in enabled drivers build config 00:01:13.118 net/iavf: not in enabled drivers build config 00:01:13.118 net/ice: not in enabled drivers build config 00:01:13.118 net/idpf: not in enabled drivers build config 00:01:13.118 net/igc: not in enabled drivers build config 00:01:13.118 net/ionic: not in enabled drivers build config 00:01:13.118 net/ipn3ke: not in enabled drivers build config 00:01:13.118 net/ixgbe: not in enabled drivers build config 00:01:13.118 net/mana: not in enabled drivers build config 00:01:13.118 net/memif: not in enabled drivers build config 00:01:13.118 net/mlx4: not in enabled drivers build config 00:01:13.118 net/mlx5: not in enabled drivers build config 00:01:13.118 net/mvneta: not in enabled drivers build config 00:01:13.118 net/mvpp2: not in enabled drivers build config 00:01:13.118 net/netvsc: not in enabled drivers build config 00:01:13.118 net/nfb: not in enabled drivers build config 00:01:13.118 net/nfp: not in enabled drivers build config 00:01:13.118 net/ngbe: not in enabled drivers build config 00:01:13.118 net/null: not in enabled drivers build config 00:01:13.118 net/octeontx: not in enabled drivers build config 00:01:13.118 net/octeon_ep: not in enabled drivers build config 00:01:13.118 net/pcap: not in enabled drivers build config 00:01:13.118 net/pfe: not in enabled drivers build config 00:01:13.118 net/qede: not in enabled drivers build config 00:01:13.118 net/ring: not in enabled drivers build config 00:01:13.118 net/sfc: not in enabled drivers build config 00:01:13.118 net/softnic: not in enabled drivers build config 00:01:13.118 net/tap: 
not in enabled drivers build config 00:01:13.118 net/thunderx: not in enabled drivers build config 00:01:13.118 net/txgbe: not in enabled drivers build config 00:01:13.118 net/vdev_netvsc: not in enabled drivers build config 00:01:13.118 net/vhost: not in enabled drivers build config 00:01:13.118 net/virtio: not in enabled drivers build config 00:01:13.118 net/vmxnet3: not in enabled drivers build config 00:01:13.118 raw/*: missing internal dependency, "rawdev" 00:01:13.118 crypto/armv8: not in enabled drivers build config 00:01:13.118 crypto/bcmfs: not in enabled drivers build config 00:01:13.118 crypto/caam_jr: not in enabled drivers build config 00:01:13.118 crypto/ccp: not in enabled drivers build config 00:01:13.118 crypto/cnxk: not in enabled drivers build config 00:01:13.118 crypto/dpaa_sec: not in enabled drivers build config 00:01:13.118 crypto/dpaa2_sec: not in enabled drivers build config 00:01:13.118 crypto/ipsec_mb: not in enabled drivers build config 00:01:13.118 crypto/mlx5: not in enabled drivers build config 00:01:13.118 crypto/mvsam: not in enabled drivers build config 00:01:13.118 crypto/nitrox: not in enabled drivers build config 00:01:13.118 crypto/null: not in enabled drivers build config 00:01:13.118 crypto/octeontx: not in enabled drivers build config 00:01:13.118 crypto/openssl: not in enabled drivers build config 00:01:13.118 crypto/scheduler: not in enabled drivers build config 00:01:13.118 crypto/uadk: not in enabled drivers build config 00:01:13.118 crypto/virtio: not in enabled drivers build config 00:01:13.118 compress/isal: not in enabled drivers build config 00:01:13.118 compress/mlx5: not in enabled drivers build config 00:01:13.118 compress/nitrox: not in enabled drivers build config 00:01:13.118 compress/octeontx: not in enabled drivers build config 00:01:13.118 compress/zlib: not in enabled drivers build config 00:01:13.118 regex/*: missing internal dependency, "regexdev" 00:01:13.118 ml/*: missing internal dependency, "mldev" 
00:01:13.118 vdpa/ifc: not in enabled drivers build config 00:01:13.118 vdpa/mlx5: not in enabled drivers build config 00:01:13.118 vdpa/nfp: not in enabled drivers build config 00:01:13.118 vdpa/sfc: not in enabled drivers build config 00:01:13.118 event/*: missing internal dependency, "eventdev" 00:01:13.118 baseband/*: missing internal dependency, "bbdev" 00:01:13.118 gpu/*: missing internal dependency, "gpudev" 00:01:13.118 00:01:13.118 00:01:13.118 Build targets in project: 85 00:01:13.118 00:01:13.118 DPDK 24.03.0 00:01:13.118 00:01:13.118 User defined options 00:01:13.118 buildtype : debug 00:01:13.118 default_library : shared 00:01:13.118 libdir : lib 00:01:13.118 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:13.118 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:13.118 c_link_args : 00:01:13.118 cpu_instruction_set: native 00:01:13.118 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:13.118 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:13.118 enable_docs : false 00:01:13.118 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:13.118 enable_kmods : false 00:01:13.118 max_lcores : 128 00:01:13.118 tests : false 00:01:13.118 00:01:13.119 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:13.390 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:13.391 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:13.391 [2/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:13.391 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:13.391 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:13.391 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:13.653 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:13.653 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:13.653 [8/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:13.653 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:13.653 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:13.653 [11/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:13.653 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:13.653 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:13.653 [14/268] Linking static target lib/librte_kvargs.a 00:01:13.653 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:13.653 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:13.653 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:13.653 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:13.653 [19/268] Linking static target lib/librte_log.a 00:01:13.653 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:13.653 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:13.653 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:13.653 [23/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:13.912 [24/268] Linking static target lib/librte_pci.a 00:01:13.912 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:13.912 
[26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:13.912 [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:13.912 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:13.912 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:13.912 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:13.912 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:13.912 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:13.912 [33/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:13.912 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:13.912 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:13.912 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:13.912 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:13.912 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:13.912 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:13.912 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:13.912 [41/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:13.912 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:13.912 [43/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:13.912 [44/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:13.912 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:13.912 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:13.912 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:13.912 [48/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:13.912 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:13.912 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:13.912 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:13.912 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:13.912 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:13.912 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:13.912 [55/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:13.912 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:13.912 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:13.912 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:13.912 [59/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:13.912 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:13.912 [61/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:13.912 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:13.912 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:13.912 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:13.912 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:13.912 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:13.912 [67/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:13.912 [68/268] Linking static target lib/librte_meter.a 00:01:14.177 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:14.177 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 
00:01:14.177 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:14.177 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:14.177 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:14.177 [74/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:14.177 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:14.177 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:14.177 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:14.177 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:14.177 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:14.177 [80/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:14.177 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:14.177 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:14.177 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:14.177 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:14.177 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:14.177 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:14.178 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:14.178 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:14.178 [89/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:14.178 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:14.178 [91/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.178 [92/268] Linking static target lib/librte_ring.a 00:01:14.178 [93/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:14.178 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:14.178 [95/268] Linking static target lib/librte_telemetry.a 00:01:14.178 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:14.178 [97/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:14.178 [98/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:14.178 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:14.178 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:14.178 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:14.178 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:14.178 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:14.178 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:14.178 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:14.178 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:14.178 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:14.178 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:14.178 [109/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:14.178 [110/268] Linking static target lib/librte_net.a 00:01:14.178 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:14.178 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:14.178 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:14.178 [114/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:14.178 [115/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:14.178 [116/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:14.178 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:14.178 [118/268] Linking static target lib/librte_rcu.a 00:01:14.178 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:14.178 [120/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.178 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:14.178 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:14.178 [123/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:14.178 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:14.178 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:14.178 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:14.178 [127/268] Linking static target lib/librte_mempool.a 00:01:14.178 [128/268] Linking static target lib/librte_eal.a 00:01:14.178 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:14.178 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:14.178 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:14.178 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:14.178 [133/268] Linking static target lib/librte_cmdline.a 00:01:14.178 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.178 [135/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.436 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:14.436 [137/268] Linking target lib/librte_log.so.24.1 00:01:14.436 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:14.436 [139/268] Generating lib/ring.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:14.436 [140/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:14.436 [141/268] Linking static target lib/librte_mbuf.a 00:01:14.436 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:14.436 [143/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.436 [144/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:14.436 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:14.436 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:14.436 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:14.437 [148/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:14.437 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.437 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:14.437 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:14.437 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:14.437 [153/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:14.437 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:14.437 [155/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:14.437 [156/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:14.437 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:14.437 [158/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.437 [159/268] Linking static target lib/librte_timer.a 00:01:14.437 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:14.437 [161/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:14.437 [162/268] Linking target lib/librte_kvargs.so.24.1 00:01:14.437 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:14.437 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:14.437 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:14.437 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:14.437 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:14.437 [168/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:14.437 [169/268] Linking target lib/librte_telemetry.so.24.1 00:01:14.437 [170/268] Linking static target lib/librte_compressdev.a 00:01:14.437 [171/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:14.437 [172/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:14.437 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:14.437 [174/268] Linking static target lib/librte_reorder.a 00:01:14.437 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:14.437 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:14.437 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:14.695 [178/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:14.695 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:14.695 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:14.695 [181/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:14.695 [182/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:14.695 [183/268] Linking static target lib/librte_dmadev.a 00:01:14.695 [184/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:14.695 [185/268] Linking static target lib/librte_power.a 00:01:14.695 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:14.695 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:14.695 [188/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:14.695 [189/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:14.695 [190/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:14.695 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:14.695 [192/268] Linking static target lib/librte_security.a 00:01:14.695 [193/268] Linking static target lib/librte_hash.a 00:01:14.695 [194/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:14.695 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:14.695 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:14.695 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:14.695 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:14.695 [199/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:14.695 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:14.695 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:14.695 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:14.695 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:14.695 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:14.695 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:14.952 [206/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:14.952 [207/268] Linking static target drivers/librte_bus_pci.a 00:01:14.952 [208/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:14.952 [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:14.952 [210/268] Linking static target drivers/librte_mempool_ring.a 00:01:14.953 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.953 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.953 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:14.953 [214/268] Linking static target lib/librte_cryptodev.a 00:01:14.953 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.953 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.953 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:15.210 [218/268] Linking static target lib/librte_ethdev.a 00:01:15.210 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.210 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.210 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.210 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.210 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.468 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.468 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 
00:01:15.468 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.468 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.401 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:16.401 [229/268] Linking static target lib/librte_vhost.a 00:01:16.658 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.029 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.294 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.858 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.858 [234/268] Linking target lib/librte_eal.so.24.1 00:01:24.115 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:24.115 [236/268] Linking target lib/librte_ring.so.24.1 00:01:24.116 [237/268] Linking target lib/librte_meter.so.24.1 00:01:24.116 [238/268] Linking target lib/librte_timer.so.24.1 00:01:24.116 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:24.116 [240/268] Linking target lib/librte_pci.so.24.1 00:01:24.116 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:24.116 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:24.116 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:24.116 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:24.116 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:24.116 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:24.372 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:24.372 [248/268] Linking target 
drivers/librte_bus_pci.so.24.1 00:01:24.372 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:24.372 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:24.372 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:24.372 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:24.372 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:24.629 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:24.629 [255/268] Linking target lib/librte_net.so.24.1 00:01:24.629 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:01:24.629 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:24.629 [258/268] Linking target lib/librte_compressdev.so.24.1 00:01:24.887 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:24.887 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:24.887 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:24.887 [262/268] Linking target lib/librte_hash.so.24.1 00:01:24.887 [263/268] Linking target lib/librte_security.so.24.1 00:01:24.887 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:24.887 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:24.887 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:25.144 [267/268] Linking target lib/librte_power.so.24.1 00:01:25.144 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:25.144 INFO: autodetecting backend as ninja 00:01:25.144 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:26.072 CC lib/log/log.o 00:01:26.072 CC lib/log/log_deprecated.o 00:01:26.072 CC lib/log/log_flags.o 00:01:26.072 CC lib/ut_mock/mock.o 00:01:26.072 CC lib/ut/ut.o 00:01:26.072 LIB libspdk_ut_mock.a 
00:01:26.072 LIB libspdk_log.a 00:01:26.072 LIB libspdk_ut.a 00:01:26.072 SO libspdk_ut_mock.so.6.0 00:01:26.072 SO libspdk_log.so.7.0 00:01:26.329 SO libspdk_ut.so.2.0 00:01:26.329 SYMLINK libspdk_ut_mock.so 00:01:26.329 SYMLINK libspdk_log.so 00:01:26.329 SYMLINK libspdk_ut.so 00:01:26.586 CC lib/util/bit_array.o 00:01:26.586 CC lib/util/cpuset.o 00:01:26.586 CC lib/util/base64.o 00:01:26.586 CC lib/util/crc16.o 00:01:26.586 CC lib/util/crc32.o 00:01:26.586 CC lib/util/crc32c.o 00:01:26.586 CC lib/util/crc32_ieee.o 00:01:26.586 CC lib/util/crc64.o 00:01:26.586 CC lib/util/dif.o 00:01:26.586 CC lib/util/fd.o 00:01:26.586 CC lib/util/hexlify.o 00:01:26.586 CC lib/util/iov.o 00:01:26.586 CC lib/util/file.o 00:01:26.586 CC lib/util/math.o 00:01:26.586 CC lib/util/pipe.o 00:01:26.586 CC lib/util/strerror_tls.o 00:01:26.586 CC lib/util/string.o 00:01:26.586 CC lib/util/uuid.o 00:01:26.586 CC lib/util/fd_group.o 00:01:26.586 CC lib/util/xor.o 00:01:26.586 CC lib/util/zipf.o 00:01:26.586 CC lib/ioat/ioat.o 00:01:26.586 CXX lib/trace_parser/trace.o 00:01:26.586 CC lib/dma/dma.o 00:01:26.586 CC lib/vfio_user/host/vfio_user.o 00:01:26.586 CC lib/vfio_user/host/vfio_user_pci.o 00:01:26.586 LIB libspdk_dma.a 00:01:26.841 SO libspdk_dma.so.4.0 00:01:26.841 LIB libspdk_ioat.a 00:01:26.841 SYMLINK libspdk_dma.so 00:01:26.841 SO libspdk_ioat.so.7.0 00:01:26.841 SYMLINK libspdk_ioat.so 00:01:26.841 LIB libspdk_vfio_user.a 00:01:26.841 SO libspdk_vfio_user.so.5.0 00:01:26.841 LIB libspdk_util.a 00:01:27.099 SYMLINK libspdk_vfio_user.so 00:01:27.099 SO libspdk_util.so.9.1 00:01:27.099 SYMLINK libspdk_util.so 00:01:27.099 LIB libspdk_trace_parser.a 00:01:27.356 SO libspdk_trace_parser.so.5.0 00:01:27.356 SYMLINK libspdk_trace_parser.so 00:01:27.356 CC lib/env_dpdk/memory.o 00:01:27.356 CC lib/env_dpdk/env.o 00:01:27.356 CC lib/env_dpdk/pci.o 00:01:27.356 CC lib/env_dpdk/init.o 00:01:27.356 CC lib/env_dpdk/threads.o 00:01:27.356 CC lib/json/json_parse.o 00:01:27.356 CC 
lib/env_dpdk/pci_ioat.o 00:01:27.356 CC lib/json/json_util.o 00:01:27.356 CC lib/json/json_write.o 00:01:27.356 CC lib/env_dpdk/pci_vmd.o 00:01:27.356 CC lib/env_dpdk/pci_virtio.o 00:01:27.356 CC lib/env_dpdk/pci_idxd.o 00:01:27.356 CC lib/env_dpdk/pci_event.o 00:01:27.356 CC lib/env_dpdk/sigbus_handler.o 00:01:27.356 CC lib/rdma_utils/rdma_utils.o 00:01:27.356 CC lib/env_dpdk/pci_dpdk.o 00:01:27.356 CC lib/rdma_provider/common.o 00:01:27.356 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:27.356 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:27.356 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:27.356 CC lib/idxd/idxd.o 00:01:27.356 CC lib/conf/conf.o 00:01:27.356 CC lib/idxd/idxd_user.o 00:01:27.356 CC lib/idxd/idxd_kernel.o 00:01:27.356 CC lib/vmd/vmd.o 00:01:27.356 CC lib/vmd/led.o 00:01:27.614 LIB libspdk_rdma_provider.a 00:01:27.614 SO libspdk_rdma_provider.so.6.0 00:01:27.614 LIB libspdk_conf.a 00:01:27.614 LIB libspdk_rdma_utils.a 00:01:27.614 SO libspdk_conf.so.6.0 00:01:27.614 LIB libspdk_json.a 00:01:27.614 SYMLINK libspdk_rdma_provider.so 00:01:27.614 SO libspdk_rdma_utils.so.1.0 00:01:27.614 SYMLINK libspdk_conf.so 00:01:27.614 SO libspdk_json.so.6.0 00:01:27.871 SYMLINK libspdk_rdma_utils.so 00:01:27.871 SYMLINK libspdk_json.so 00:01:27.871 LIB libspdk_idxd.a 00:01:27.871 SO libspdk_idxd.so.12.0 00:01:27.871 LIB libspdk_vmd.a 00:01:27.871 SYMLINK libspdk_idxd.so 00:01:27.871 SO libspdk_vmd.so.6.0 00:01:28.128 SYMLINK libspdk_vmd.so 00:01:28.128 CC lib/jsonrpc/jsonrpc_server.o 00:01:28.128 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:28.128 CC lib/jsonrpc/jsonrpc_client.o 00:01:28.128 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:28.386 LIB libspdk_jsonrpc.a 00:01:28.386 SO libspdk_jsonrpc.so.6.0 00:01:28.386 SYMLINK libspdk_jsonrpc.so 00:01:28.386 LIB libspdk_env_dpdk.a 00:01:28.386 SO libspdk_env_dpdk.so.14.1 00:01:28.643 SYMLINK libspdk_env_dpdk.so 00:01:28.643 CC lib/rpc/rpc.o 00:01:28.900 LIB libspdk_rpc.a 00:01:28.900 SO libspdk_rpc.so.6.0 00:01:28.900 SYMLINK 
libspdk_rpc.so 00:01:29.158 CC lib/keyring/keyring.o 00:01:29.158 CC lib/keyring/keyring_rpc.o 00:01:29.158 CC lib/notify/notify.o 00:01:29.159 CC lib/notify/notify_rpc.o 00:01:29.159 CC lib/trace/trace.o 00:01:29.159 CC lib/trace/trace_flags.o 00:01:29.159 CC lib/trace/trace_rpc.o 00:01:29.417 LIB libspdk_notify.a 00:01:29.417 LIB libspdk_keyring.a 00:01:29.417 SO libspdk_keyring.so.1.0 00:01:29.417 SO libspdk_notify.so.6.0 00:01:29.417 LIB libspdk_trace.a 00:01:29.417 SYMLINK libspdk_keyring.so 00:01:29.417 SYMLINK libspdk_notify.so 00:01:29.417 SO libspdk_trace.so.10.0 00:01:29.417 SYMLINK libspdk_trace.so 00:01:29.984 CC lib/thread/thread.o 00:01:29.984 CC lib/thread/iobuf.o 00:01:29.984 CC lib/sock/sock.o 00:01:29.984 CC lib/sock/sock_rpc.o 00:01:30.241 LIB libspdk_sock.a 00:01:30.241 SO libspdk_sock.so.10.0 00:01:30.241 SYMLINK libspdk_sock.so 00:01:30.512 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:30.512 CC lib/nvme/nvme_ctrlr.o 00:01:30.512 CC lib/nvme/nvme_fabric.o 00:01:30.512 CC lib/nvme/nvme_ns_cmd.o 00:01:30.512 CC lib/nvme/nvme_pcie_common.o 00:01:30.512 CC lib/nvme/nvme_ns.o 00:01:30.512 CC lib/nvme/nvme_pcie.o 00:01:30.512 CC lib/nvme/nvme.o 00:01:30.512 CC lib/nvme/nvme_qpair.o 00:01:30.512 CC lib/nvme/nvme_quirks.o 00:01:30.512 CC lib/nvme/nvme_discovery.o 00:01:30.512 CC lib/nvme/nvme_transport.o 00:01:30.512 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:30.512 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:30.512 CC lib/nvme/nvme_tcp.o 00:01:30.512 CC lib/nvme/nvme_opal.o 00:01:30.512 CC lib/nvme/nvme_io_msg.o 00:01:30.512 CC lib/nvme/nvme_poll_group.o 00:01:30.512 CC lib/nvme/nvme_zns.o 00:01:30.512 CC lib/nvme/nvme_stubs.o 00:01:30.512 CC lib/nvme/nvme_auth.o 00:01:30.512 CC lib/nvme/nvme_cuse.o 00:01:30.512 CC lib/nvme/nvme_vfio_user.o 00:01:30.512 CC lib/nvme/nvme_rdma.o 00:01:30.769 LIB libspdk_thread.a 00:01:31.026 SO libspdk_thread.so.10.1 00:01:31.026 SYMLINK libspdk_thread.so 00:01:31.282 CC lib/init/subsystem.o 00:01:31.282 CC lib/init/json_config.o 
00:01:31.282 CC lib/init/rpc.o 00:01:31.282 CC lib/init/subsystem_rpc.o 00:01:31.282 CC lib/virtio/virtio.o 00:01:31.282 CC lib/virtio/virtio_vhost_user.o 00:01:31.282 CC lib/virtio/virtio_vfio_user.o 00:01:31.282 CC lib/virtio/virtio_pci.o 00:01:31.282 CC lib/accel/accel_rpc.o 00:01:31.282 CC lib/accel/accel.o 00:01:31.282 CC lib/accel/accel_sw.o 00:01:31.282 CC lib/vfu_tgt/tgt_endpoint.o 00:01:31.282 CC lib/vfu_tgt/tgt_rpc.o 00:01:31.282 CC lib/blob/blobstore.o 00:01:31.282 CC lib/blob/zeroes.o 00:01:31.282 CC lib/blob/blob_bs_dev.o 00:01:31.282 CC lib/blob/request.o 00:01:31.539 LIB libspdk_init.a 00:01:31.539 SO libspdk_init.so.5.0 00:01:31.539 LIB libspdk_virtio.a 00:01:31.539 LIB libspdk_vfu_tgt.a 00:01:31.539 SYMLINK libspdk_init.so 00:01:31.539 SO libspdk_virtio.so.7.0 00:01:31.539 SO libspdk_vfu_tgt.so.3.0 00:01:31.539 SYMLINK libspdk_virtio.so 00:01:31.539 SYMLINK libspdk_vfu_tgt.so 00:01:31.797 CC lib/event/app.o 00:01:31.797 CC lib/event/app_rpc.o 00:01:31.797 CC lib/event/reactor.o 00:01:31.797 CC lib/event/log_rpc.o 00:01:31.797 CC lib/event/scheduler_static.o 00:01:32.055 LIB libspdk_accel.a 00:01:32.055 SO libspdk_accel.so.15.1 00:01:32.055 SYMLINK libspdk_accel.so 00:01:32.055 LIB libspdk_nvme.a 00:01:32.055 LIB libspdk_event.a 00:01:32.055 SO libspdk_event.so.14.0 00:01:32.055 SO libspdk_nvme.so.13.1 00:01:32.315 SYMLINK libspdk_event.so 00:01:32.315 CC lib/bdev/bdev.o 00:01:32.315 CC lib/bdev/bdev_rpc.o 00:01:32.315 CC lib/bdev/bdev_zone.o 00:01:32.315 CC lib/bdev/part.o 00:01:32.315 CC lib/bdev/scsi_nvme.o 00:01:32.315 SYMLINK libspdk_nvme.so 00:01:33.282 LIB libspdk_blob.a 00:01:33.282 SO libspdk_blob.so.11.0 00:01:33.540 SYMLINK libspdk_blob.so 00:01:33.797 CC lib/lvol/lvol.o 00:01:33.797 CC lib/blobfs/blobfs.o 00:01:33.797 CC lib/blobfs/tree.o 00:01:34.055 LIB libspdk_bdev.a 00:01:34.055 SO libspdk_bdev.so.15.1 00:01:34.313 SYMLINK libspdk_bdev.so 00:01:34.313 LIB libspdk_blobfs.a 00:01:34.313 SO libspdk_blobfs.so.10.0 00:01:34.313 LIB 
libspdk_lvol.a 00:01:34.313 SO libspdk_lvol.so.10.0 00:01:34.313 SYMLINK libspdk_blobfs.so 00:01:34.573 SYMLINK libspdk_lvol.so 00:01:34.573 CC lib/ftl/ftl_core.o 00:01:34.573 CC lib/scsi/dev.o 00:01:34.573 CC lib/nbd/nbd.o 00:01:34.573 CC lib/ftl/ftl_init.o 00:01:34.573 CC lib/ftl/ftl_debug.o 00:01:34.573 CC lib/ftl/ftl_layout.o 00:01:34.573 CC lib/scsi/lun.o 00:01:34.573 CC lib/nbd/nbd_rpc.o 00:01:34.573 CC lib/scsi/port.o 00:01:34.573 CC lib/scsi/scsi.o 00:01:34.573 CC lib/nvmf/ctrlr_discovery.o 00:01:34.573 CC lib/ftl/ftl_io.o 00:01:34.573 CC lib/nvmf/ctrlr.o 00:01:34.573 CC lib/scsi/scsi_bdev.o 00:01:34.573 CC lib/ftl/ftl_sb.o 00:01:34.573 CC lib/ftl/ftl_l2p_flat.o 00:01:34.573 CC lib/scsi/task.o 00:01:34.573 CC lib/scsi/scsi_pr.o 00:01:34.573 CC lib/ftl/ftl_l2p.o 00:01:34.573 CC lib/nvmf/ctrlr_bdev.o 00:01:34.573 CC lib/scsi/scsi_rpc.o 00:01:34.573 CC lib/nvmf/subsystem.o 00:01:34.573 CC lib/ftl/ftl_nv_cache.o 00:01:34.573 CC lib/nvmf/nvmf.o 00:01:34.573 CC lib/ftl/ftl_band.o 00:01:34.573 CC lib/ftl/ftl_band_ops.o 00:01:34.573 CC lib/nvmf/nvmf_rpc.o 00:01:34.573 CC lib/ftl/ftl_writer.o 00:01:34.573 CC lib/nvmf/tcp.o 00:01:34.573 CC lib/nvmf/transport.o 00:01:34.573 CC lib/ublk/ublk.o 00:01:34.573 CC lib/nvmf/stubs.o 00:01:34.573 CC lib/ftl/ftl_rq.o 00:01:34.573 CC lib/ublk/ublk_rpc.o 00:01:34.573 CC lib/nvmf/mdns_server.o 00:01:34.573 CC lib/ftl/ftl_reloc.o 00:01:34.573 CC lib/nvmf/rdma.o 00:01:34.573 CC lib/ftl/ftl_l2p_cache.o 00:01:34.573 CC lib/nvmf/vfio_user.o 00:01:34.573 CC lib/nvmf/auth.o 00:01:34.573 CC lib/ftl/ftl_p2l.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:34.573 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:34.573 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:34.573 CC lib/ftl/utils/ftl_conf.o 00:01:34.573 CC lib/ftl/utils/ftl_mempool.o 00:01:34.573 CC lib/ftl/utils/ftl_md.o 00:01:34.573 CC lib/ftl/utils/ftl_property.o 00:01:34.573 CC lib/ftl/utils/ftl_bitmap.o 00:01:34.573 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:34.573 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:34.573 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:34.573 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:34.573 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:34.573 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:34.573 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:34.573 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:34.573 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:34.573 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:34.573 CC lib/ftl/base/ftl_base_dev.o 00:01:34.573 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:34.573 CC lib/ftl/base/ftl_base_bdev.o 00:01:34.573 CC lib/ftl/ftl_trace.o 00:01:35.140 LIB libspdk_nbd.a 00:01:35.140 SO libspdk_nbd.so.7.0 00:01:35.140 LIB libspdk_scsi.a 00:01:35.140 LIB libspdk_ublk.a 00:01:35.140 SYMLINK libspdk_nbd.so 00:01:35.140 SO libspdk_scsi.so.9.0 00:01:35.140 SO libspdk_ublk.so.3.0 00:01:35.140 SYMLINK libspdk_ublk.so 00:01:35.399 SYMLINK libspdk_scsi.so 00:01:35.657 LIB libspdk_ftl.a 00:01:35.657 CC lib/iscsi/conn.o 00:01:35.657 CC lib/iscsi/init_grp.o 00:01:35.657 CC lib/iscsi/iscsi.o 00:01:35.657 CC lib/iscsi/md5.o 00:01:35.657 CC lib/iscsi/portal_grp.o 00:01:35.657 CC lib/iscsi/param.o 00:01:35.657 CC lib/vhost/vhost.o 00:01:35.657 CC lib/vhost/vhost_rpc.o 00:01:35.657 CC lib/iscsi/tgt_node.o 00:01:35.657 CC lib/vhost/vhost_scsi.o 00:01:35.657 CC lib/iscsi/iscsi_subsystem.o 00:01:35.657 CC lib/vhost/vhost_blk.o 00:01:35.657 CC lib/vhost/rte_vhost_user.o 00:01:35.657 CC lib/iscsi/iscsi_rpc.o 00:01:35.657 CC lib/iscsi/task.o 00:01:35.657 SO libspdk_ftl.so.9.0 00:01:35.915 
SYMLINK libspdk_ftl.so 00:01:36.174 LIB libspdk_nvmf.a 00:01:36.174 SO libspdk_nvmf.so.18.1 00:01:36.432 LIB libspdk_vhost.a 00:01:36.432 SYMLINK libspdk_nvmf.so 00:01:36.432 SO libspdk_vhost.so.8.0 00:01:36.432 SYMLINK libspdk_vhost.so 00:01:36.432 LIB libspdk_iscsi.a 00:01:36.690 SO libspdk_iscsi.so.8.0 00:01:36.690 SYMLINK libspdk_iscsi.so 00:01:37.253 CC module/vfu_device/vfu_virtio.o 00:01:37.253 CC module/vfu_device/vfu_virtio_blk.o 00:01:37.253 CC module/vfu_device/vfu_virtio_scsi.o 00:01:37.253 CC module/vfu_device/vfu_virtio_rpc.o 00:01:37.253 CC module/env_dpdk/env_dpdk_rpc.o 00:01:37.253 CC module/scheduler/gscheduler/gscheduler.o 00:01:37.253 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:37.253 CC module/accel/error/accel_error.o 00:01:37.253 CC module/accel/error/accel_error_rpc.o 00:01:37.253 CC module/accel/ioat/accel_ioat.o 00:01:37.253 CC module/accel/dsa/accel_dsa.o 00:01:37.253 CC module/accel/dsa/accel_dsa_rpc.o 00:01:37.253 CC module/accel/ioat/accel_ioat_rpc.o 00:01:37.253 CC module/accel/iaa/accel_iaa.o 00:01:37.253 CC module/accel/iaa/accel_iaa_rpc.o 00:01:37.253 CC module/keyring/file/keyring.o 00:01:37.253 CC module/keyring/file/keyring_rpc.o 00:01:37.253 CC module/blob/bdev/blob_bdev.o 00:01:37.253 LIB libspdk_env_dpdk_rpc.a 00:01:37.253 CC module/sock/posix/posix.o 00:01:37.253 CC module/keyring/linux/keyring.o 00:01:37.253 CC module/keyring/linux/keyring_rpc.o 00:01:37.253 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:37.253 SO libspdk_env_dpdk_rpc.so.6.0 00:01:37.510 SYMLINK libspdk_env_dpdk_rpc.so 00:01:37.510 LIB libspdk_scheduler_gscheduler.a 00:01:37.510 LIB libspdk_keyring_file.a 00:01:37.510 LIB libspdk_accel_error.a 00:01:37.510 LIB libspdk_scheduler_dpdk_governor.a 00:01:37.510 LIB libspdk_keyring_linux.a 00:01:37.510 SO libspdk_scheduler_gscheduler.so.4.0 00:01:37.510 SO libspdk_keyring_file.so.1.0 00:01:37.510 LIB libspdk_accel_iaa.a 00:01:37.510 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:37.510 LIB 
libspdk_accel_ioat.a 00:01:37.510 SO libspdk_accel_error.so.2.0 00:01:37.510 LIB libspdk_scheduler_dynamic.a 00:01:37.510 SO libspdk_keyring_linux.so.1.0 00:01:37.510 SO libspdk_accel_ioat.so.6.0 00:01:37.510 SYMLINK libspdk_scheduler_gscheduler.so 00:01:37.510 SO libspdk_accel_iaa.so.3.0 00:01:37.510 SYMLINK libspdk_keyring_file.so 00:01:37.510 SO libspdk_scheduler_dynamic.so.4.0 00:01:37.510 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:37.510 LIB libspdk_blob_bdev.a 00:01:37.510 LIB libspdk_accel_dsa.a 00:01:37.510 SYMLINK libspdk_accel_error.so 00:01:37.510 SYMLINK libspdk_keyring_linux.so 00:01:37.510 SO libspdk_blob_bdev.so.11.0 00:01:37.510 SO libspdk_accel_dsa.so.5.0 00:01:37.510 SYMLINK libspdk_accel_ioat.so 00:01:37.510 SYMLINK libspdk_scheduler_dynamic.so 00:01:37.510 SYMLINK libspdk_accel_iaa.so 00:01:37.510 SYMLINK libspdk_blob_bdev.so 00:01:37.510 SYMLINK libspdk_accel_dsa.so 00:01:37.510 LIB libspdk_vfu_device.a 00:01:37.766 SO libspdk_vfu_device.so.3.0 00:01:37.766 SYMLINK libspdk_vfu_device.so 00:01:37.766 LIB libspdk_sock_posix.a 00:01:38.024 SO libspdk_sock_posix.so.6.0 00:01:38.024 SYMLINK libspdk_sock_posix.so 00:01:38.024 CC module/bdev/error/vbdev_error_rpc.o 00:01:38.024 CC module/bdev/error/vbdev_error.o 00:01:38.024 CC module/bdev/null/bdev_null_rpc.o 00:01:38.024 CC module/bdev/null/bdev_null.o 00:01:38.024 CC module/bdev/split/vbdev_split.o 00:01:38.024 CC module/bdev/split/vbdev_split_rpc.o 00:01:38.024 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:38.024 CC module/bdev/iscsi/bdev_iscsi.o 00:01:38.024 CC module/bdev/aio/bdev_aio.o 00:01:38.024 CC module/bdev/aio/bdev_aio_rpc.o 00:01:38.024 CC module/blobfs/bdev/blobfs_bdev.o 00:01:38.024 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:38.024 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:38.024 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:38.024 CC module/bdev/lvol/vbdev_lvol.o 00:01:38.024 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:38.024 CC module/bdev/virtio/bdev_virtio_rpc.o 
00:01:38.024 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:38.024 CC module/bdev/delay/vbdev_delay.o 00:01:38.024 CC module/bdev/passthru/vbdev_passthru.o 00:01:38.024 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:38.024 CC module/bdev/raid/bdev_raid.o 00:01:38.024 CC module/bdev/raid/bdev_raid_sb.o 00:01:38.024 CC module/bdev/raid/bdev_raid_rpc.o 00:01:38.024 CC module/bdev/malloc/bdev_malloc.o 00:01:38.024 CC module/bdev/raid/raid0.o 00:01:38.025 CC module/bdev/raid/raid1.o 00:01:38.025 CC module/bdev/gpt/gpt.o 00:01:38.025 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:38.025 CC module/bdev/gpt/vbdev_gpt.o 00:01:38.025 CC module/bdev/raid/concat.o 00:01:38.025 CC module/bdev/nvme/bdev_nvme.o 00:01:38.025 CC module/bdev/nvme/nvme_rpc.o 00:01:38.025 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:38.025 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:38.025 CC module/bdev/ftl/bdev_ftl.o 00:01:38.025 CC module/bdev/nvme/bdev_mdns_client.o 00:01:38.025 CC module/bdev/nvme/vbdev_opal.o 00:01:38.025 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:38.025 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:38.025 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:38.025 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:38.282 LIB libspdk_bdev_split.a 00:01:38.282 LIB libspdk_blobfs_bdev.a 00:01:38.282 LIB libspdk_bdev_null.a 00:01:38.282 LIB libspdk_bdev_error.a 00:01:38.282 SO libspdk_bdev_split.so.6.0 00:01:38.282 SO libspdk_bdev_null.so.6.0 00:01:38.282 SO libspdk_blobfs_bdev.so.6.0 00:01:38.282 SO libspdk_bdev_error.so.6.0 00:01:38.282 LIB libspdk_bdev_gpt.a 00:01:38.282 LIB libspdk_bdev_aio.a 00:01:38.282 SYMLINK libspdk_bdev_null.so 00:01:38.282 SYMLINK libspdk_bdev_split.so 00:01:38.282 LIB libspdk_bdev_ftl.a 00:01:38.282 LIB libspdk_bdev_passthru.a 00:01:38.282 SO libspdk_bdev_gpt.so.6.0 00:01:38.539 SYMLINK libspdk_blobfs_bdev.so 00:01:38.539 SYMLINK libspdk_bdev_error.so 00:01:38.539 LIB libspdk_bdev_iscsi.a 00:01:38.539 LIB libspdk_bdev_malloc.a 00:01:38.539 SO 
libspdk_bdev_passthru.so.6.0 00:01:38.539 SO libspdk_bdev_aio.so.6.0 00:01:38.539 LIB libspdk_bdev_delay.a 00:01:38.539 SO libspdk_bdev_ftl.so.6.0 00:01:38.539 LIB libspdk_bdev_zone_block.a 00:01:38.539 SO libspdk_bdev_iscsi.so.6.0 00:01:38.539 SO libspdk_bdev_malloc.so.6.0 00:01:38.539 SO libspdk_bdev_delay.so.6.0 00:01:38.539 SYMLINK libspdk_bdev_gpt.so 00:01:38.539 SO libspdk_bdev_zone_block.so.6.0 00:01:38.539 SYMLINK libspdk_bdev_passthru.so 00:01:38.539 SYMLINK libspdk_bdev_aio.so 00:01:38.539 SYMLINK libspdk_bdev_ftl.so 00:01:38.539 SYMLINK libspdk_bdev_malloc.so 00:01:38.539 SYMLINK libspdk_bdev_iscsi.so 00:01:38.539 SYMLINK libspdk_bdev_zone_block.so 00:01:38.539 SYMLINK libspdk_bdev_delay.so 00:01:38.539 LIB libspdk_bdev_lvol.a 00:01:38.539 LIB libspdk_bdev_virtio.a 00:01:38.539 SO libspdk_bdev_lvol.so.6.0 00:01:38.539 SO libspdk_bdev_virtio.so.6.0 00:01:38.539 SYMLINK libspdk_bdev_lvol.so 00:01:38.539 SYMLINK libspdk_bdev_virtio.so 00:01:38.796 LIB libspdk_bdev_raid.a 00:01:38.796 SO libspdk_bdev_raid.so.6.0 00:01:39.053 SYMLINK libspdk_bdev_raid.so 00:01:39.618 LIB libspdk_bdev_nvme.a 00:01:39.618 SO libspdk_bdev_nvme.so.7.0 00:01:39.876 SYMLINK libspdk_bdev_nvme.so 00:01:40.441 CC module/event/subsystems/keyring/keyring.o 00:01:40.441 CC module/event/subsystems/scheduler/scheduler.o 00:01:40.441 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:40.441 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:40.441 CC module/event/subsystems/vmd/vmd.o 00:01:40.441 CC module/event/subsystems/sock/sock.o 00:01:40.441 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:40.441 CC module/event/subsystems/iobuf/iobuf.o 00:01:40.441 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:40.441 LIB libspdk_event_vhost_blk.a 00:01:40.441 LIB libspdk_event_keyring.a 00:01:40.441 LIB libspdk_event_scheduler.a 00:01:40.699 LIB libspdk_event_vmd.a 00:01:40.699 SO libspdk_event_vhost_blk.so.3.0 00:01:40.699 SO libspdk_event_keyring.so.1.0 00:01:40.699 LIB libspdk_event_iobuf.a 
00:01:40.699 LIB libspdk_event_vfu_tgt.a 00:01:40.699 LIB libspdk_event_sock.a 00:01:40.699 SO libspdk_event_scheduler.so.4.0 00:01:40.699 SO libspdk_event_vmd.so.6.0 00:01:40.699 SO libspdk_event_vfu_tgt.so.3.0 00:01:40.699 SO libspdk_event_iobuf.so.3.0 00:01:40.699 SO libspdk_event_sock.so.5.0 00:01:40.699 SYMLINK libspdk_event_vhost_blk.so 00:01:40.699 SYMLINK libspdk_event_keyring.so 00:01:40.699 SYMLINK libspdk_event_scheduler.so 00:01:40.699 SYMLINK libspdk_event_sock.so 00:01:40.699 SYMLINK libspdk_event_vmd.so 00:01:40.699 SYMLINK libspdk_event_vfu_tgt.so 00:01:40.699 SYMLINK libspdk_event_iobuf.so 00:01:40.956 CC module/event/subsystems/accel/accel.o 00:01:40.956 LIB libspdk_event_accel.a 00:01:41.214 SO libspdk_event_accel.so.6.0 00:01:41.214 SYMLINK libspdk_event_accel.so 00:01:41.472 CC module/event/subsystems/bdev/bdev.o 00:01:41.472 LIB libspdk_event_bdev.a 00:01:41.730 SO libspdk_event_bdev.so.6.0 00:01:41.730 SYMLINK libspdk_event_bdev.so 00:01:41.988 CC module/event/subsystems/ublk/ublk.o 00:01:41.988 CC module/event/subsystems/nbd/nbd.o 00:01:41.988 CC module/event/subsystems/scsi/scsi.o 00:01:41.988 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:41.988 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:41.988 LIB libspdk_event_nbd.a 00:01:41.988 LIB libspdk_event_ublk.a 00:01:41.988 SO libspdk_event_nbd.so.6.0 00:01:41.988 LIB libspdk_event_scsi.a 00:01:42.246 SO libspdk_event_ublk.so.3.0 00:01:42.246 SO libspdk_event_scsi.so.6.0 00:01:42.246 SYMLINK libspdk_event_ublk.so 00:01:42.246 LIB libspdk_event_nvmf.a 00:01:42.246 SYMLINK libspdk_event_nbd.so 00:01:42.246 SO libspdk_event_nvmf.so.6.0 00:01:42.246 SYMLINK libspdk_event_scsi.so 00:01:42.246 SYMLINK libspdk_event_nvmf.so 00:01:42.504 CC module/event/subsystems/iscsi/iscsi.o 00:01:42.504 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:42.504 LIB libspdk_event_iscsi.a 00:01:42.504 LIB libspdk_event_vhost_scsi.a 00:01:42.762 SO libspdk_event_iscsi.so.6.0 00:01:42.762 SO 
libspdk_event_vhost_scsi.so.3.0 00:01:42.762 SYMLINK libspdk_event_iscsi.so 00:01:42.762 SYMLINK libspdk_event_vhost_scsi.so 00:01:42.762 SO libspdk.so.6.0 00:01:42.762 SYMLINK libspdk.so 00:01:43.332 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:43.332 CC app/spdk_lspci/spdk_lspci.o 00:01:43.332 CC app/spdk_nvme_identify/identify.o 00:01:43.332 CC app/spdk_top/spdk_top.o 00:01:43.332 CC app/trace_record/trace_record.o 00:01:43.332 CC app/spdk_nvme_discover/discovery_aer.o 00:01:43.332 CC app/spdk_nvme_perf/perf.o 00:01:43.332 CXX app/trace/trace.o 00:01:43.332 TEST_HEADER include/spdk/assert.h 00:01:43.332 TEST_HEADER include/spdk/accel_module.h 00:01:43.332 TEST_HEADER include/spdk/accel.h 00:01:43.332 TEST_HEADER include/spdk/barrier.h 00:01:43.332 TEST_HEADER include/spdk/base64.h 00:01:43.332 TEST_HEADER include/spdk/bdev.h 00:01:43.332 TEST_HEADER include/spdk/bdev_zone.h 00:01:43.332 CC test/rpc_client/rpc_client_test.o 00:01:43.332 TEST_HEADER include/spdk/bdev_module.h 00:01:43.332 TEST_HEADER include/spdk/bit_array.h 00:01:43.332 TEST_HEADER include/spdk/bit_pool.h 00:01:43.332 TEST_HEADER include/spdk/blob_bdev.h 00:01:43.332 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:43.332 TEST_HEADER include/spdk/blobfs.h 00:01:43.332 TEST_HEADER include/spdk/blob.h 00:01:43.332 TEST_HEADER include/spdk/conf.h 00:01:43.332 TEST_HEADER include/spdk/config.h 00:01:43.332 TEST_HEADER include/spdk/crc16.h 00:01:43.332 TEST_HEADER include/spdk/cpuset.h 00:01:43.332 CC app/nvmf_tgt/nvmf_main.o 00:01:43.332 TEST_HEADER include/spdk/crc32.h 00:01:43.332 TEST_HEADER include/spdk/crc64.h 00:01:43.332 CC app/spdk_dd/spdk_dd.o 00:01:43.332 TEST_HEADER include/spdk/dma.h 00:01:43.332 TEST_HEADER include/spdk/dif.h 00:01:43.332 TEST_HEADER include/spdk/endian.h 00:01:43.332 TEST_HEADER include/spdk/env_dpdk.h 00:01:43.332 TEST_HEADER include/spdk/env.h 00:01:43.332 CC app/iscsi_tgt/iscsi_tgt.o 00:01:43.332 TEST_HEADER include/spdk/event.h 00:01:43.332 TEST_HEADER 
include/spdk/fd_group.h 00:01:43.332 TEST_HEADER include/spdk/fd.h 00:01:43.332 TEST_HEADER include/spdk/file.h 00:01:43.332 TEST_HEADER include/spdk/ftl.h 00:01:43.332 TEST_HEADER include/spdk/gpt_spec.h 00:01:43.332 TEST_HEADER include/spdk/hexlify.h 00:01:43.332 TEST_HEADER include/spdk/histogram_data.h 00:01:43.332 TEST_HEADER include/spdk/idxd.h 00:01:43.332 TEST_HEADER include/spdk/idxd_spec.h 00:01:43.332 TEST_HEADER include/spdk/ioat.h 00:01:43.332 TEST_HEADER include/spdk/init.h 00:01:43.332 TEST_HEADER include/spdk/ioat_spec.h 00:01:43.332 TEST_HEADER include/spdk/json.h 00:01:43.332 TEST_HEADER include/spdk/iscsi_spec.h 00:01:43.332 TEST_HEADER include/spdk/jsonrpc.h 00:01:43.332 TEST_HEADER include/spdk/keyring.h 00:01:43.332 TEST_HEADER include/spdk/keyring_module.h 00:01:43.332 TEST_HEADER include/spdk/likely.h 00:01:43.332 TEST_HEADER include/spdk/log.h 00:01:43.332 TEST_HEADER include/spdk/lvol.h 00:01:43.332 CC app/spdk_tgt/spdk_tgt.o 00:01:43.332 TEST_HEADER include/spdk/memory.h 00:01:43.332 TEST_HEADER include/spdk/mmio.h 00:01:43.332 TEST_HEADER include/spdk/notify.h 00:01:43.332 TEST_HEADER include/spdk/nbd.h 00:01:43.332 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:43.332 TEST_HEADER include/spdk/nvme_intel.h 00:01:43.332 TEST_HEADER include/spdk/nvme.h 00:01:43.332 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:43.332 TEST_HEADER include/spdk/nvme_spec.h 00:01:43.332 TEST_HEADER include/spdk/nvme_zns.h 00:01:43.332 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:43.332 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:43.332 TEST_HEADER include/spdk/nvmf_spec.h 00:01:43.332 TEST_HEADER include/spdk/nvmf_transport.h 00:01:43.332 TEST_HEADER include/spdk/nvmf.h 00:01:43.332 TEST_HEADER include/spdk/opal.h 00:01:43.332 TEST_HEADER include/spdk/opal_spec.h 00:01:43.332 TEST_HEADER include/spdk/pci_ids.h 00:01:43.332 TEST_HEADER include/spdk/pipe.h 00:01:43.332 TEST_HEADER include/spdk/reduce.h 00:01:43.332 TEST_HEADER include/spdk/rpc.h 00:01:43.332 
TEST_HEADER include/spdk/queue.h 00:01:43.332 TEST_HEADER include/spdk/scheduler.h 00:01:43.332 TEST_HEADER include/spdk/scsi.h 00:01:43.332 TEST_HEADER include/spdk/scsi_spec.h 00:01:43.332 TEST_HEADER include/spdk/sock.h 00:01:43.332 TEST_HEADER include/spdk/string.h 00:01:43.332 TEST_HEADER include/spdk/thread.h 00:01:43.332 CC examples/ioat/verify/verify.o 00:01:43.332 TEST_HEADER include/spdk/stdinc.h 00:01:43.332 TEST_HEADER include/spdk/trace.h 00:01:43.332 TEST_HEADER include/spdk/trace_parser.h 00:01:43.332 TEST_HEADER include/spdk/tree.h 00:01:43.332 TEST_HEADER include/spdk/ublk.h 00:01:43.332 TEST_HEADER include/spdk/util.h 00:01:43.332 TEST_HEADER include/spdk/uuid.h 00:01:43.332 TEST_HEADER include/spdk/version.h 00:01:43.332 CC examples/ioat/perf/perf.o 00:01:43.332 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:43.332 TEST_HEADER include/spdk/vhost.h 00:01:43.332 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:43.332 TEST_HEADER include/spdk/vmd.h 00:01:43.332 TEST_HEADER include/spdk/xor.h 00:01:43.332 TEST_HEADER include/spdk/zipf.h 00:01:43.332 CXX test/cpp_headers/accel_module.o 00:01:43.332 CXX test/cpp_headers/assert.o 00:01:43.332 CXX test/cpp_headers/accel.o 00:01:43.332 CXX test/cpp_headers/barrier.o 00:01:43.332 CXX test/cpp_headers/base64.o 00:01:43.332 CXX test/cpp_headers/bdev.o 00:01:43.332 CXX test/cpp_headers/bdev_module.o 00:01:43.332 CXX test/cpp_headers/bdev_zone.o 00:01:43.332 CXX test/cpp_headers/bit_pool.o 00:01:43.332 CXX test/cpp_headers/blob_bdev.o 00:01:43.332 CXX test/cpp_headers/bit_array.o 00:01:43.332 CXX test/cpp_headers/blobfs.o 00:01:43.332 CXX test/cpp_headers/blob.o 00:01:43.332 CXX test/cpp_headers/blobfs_bdev.o 00:01:43.332 CXX test/cpp_headers/conf.o 00:01:43.332 CXX test/cpp_headers/config.o 00:01:43.332 CXX test/cpp_headers/cpuset.o 00:01:43.332 CXX test/cpp_headers/crc16.o 00:01:43.332 CXX test/cpp_headers/crc32.o 00:01:43.332 CC examples/util/zipf/zipf.o 00:01:43.332 CXX test/cpp_headers/crc64.o 00:01:43.332 
CXX test/cpp_headers/dif.o 00:01:43.332 CXX test/cpp_headers/dma.o 00:01:43.332 CXX test/cpp_headers/endian.o 00:01:43.332 CXX test/cpp_headers/env_dpdk.o 00:01:43.332 CXX test/cpp_headers/env.o 00:01:43.332 CXX test/cpp_headers/event.o 00:01:43.332 CXX test/cpp_headers/fd_group.o 00:01:43.332 CXX test/cpp_headers/gpt_spec.o 00:01:43.332 CXX test/cpp_headers/ftl.o 00:01:43.332 CXX test/cpp_headers/file.o 00:01:43.332 CXX test/cpp_headers/fd.o 00:01:43.332 CXX test/cpp_headers/idxd.o 00:01:43.332 CXX test/cpp_headers/hexlify.o 00:01:43.332 CXX test/cpp_headers/idxd_spec.o 00:01:43.332 CXX test/cpp_headers/init.o 00:01:43.332 CXX test/cpp_headers/histogram_data.o 00:01:43.332 CXX test/cpp_headers/ioat_spec.o 00:01:43.332 CXX test/cpp_headers/ioat.o 00:01:43.332 CXX test/cpp_headers/iscsi_spec.o 00:01:43.332 CXX test/cpp_headers/json.o 00:01:43.332 CXX test/cpp_headers/keyring_module.o 00:01:43.332 CXX test/cpp_headers/jsonrpc.o 00:01:43.332 CXX test/cpp_headers/keyring.o 00:01:43.332 CXX test/cpp_headers/likely.o 00:01:43.332 CXX test/cpp_headers/memory.o 00:01:43.332 CXX test/cpp_headers/log.o 00:01:43.332 CXX test/cpp_headers/lvol.o 00:01:43.332 CXX test/cpp_headers/mmio.o 00:01:43.332 CXX test/cpp_headers/nvme.o 00:01:43.332 CXX test/cpp_headers/nbd.o 00:01:43.332 CXX test/cpp_headers/nvme_intel.o 00:01:43.332 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:43.332 CXX test/cpp_headers/notify.o 00:01:43.332 CXX test/cpp_headers/nvme_ocssd.o 00:01:43.332 CXX test/cpp_headers/nvmf_cmd.o 00:01:43.332 CXX test/cpp_headers/nvme_zns.o 00:01:43.332 CXX test/cpp_headers/nvme_spec.o 00:01:43.332 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:43.332 CXX test/cpp_headers/nvmf.o 00:01:43.332 CXX test/cpp_headers/nvmf_transport.o 00:01:43.332 CXX test/cpp_headers/nvmf_spec.o 00:01:43.332 CXX test/cpp_headers/opal.o 00:01:43.332 CXX test/cpp_headers/pci_ids.o 00:01:43.332 CXX test/cpp_headers/opal_spec.o 00:01:43.333 CXX test/cpp_headers/pipe.o 00:01:43.333 CXX test/cpp_headers/queue.o 
00:01:43.333 CXX test/cpp_headers/reduce.o 00:01:43.333 CC app/fio/nvme/fio_plugin.o 00:01:43.333 CC test/app/jsoncat/jsoncat.o 00:01:43.333 LINK spdk_lspci 00:01:43.333 CC test/env/memory/memory_ut.o 00:01:43.333 CC test/app/histogram_perf/histogram_perf.o 00:01:43.333 CC test/thread/poller_perf/poller_perf.o 00:01:43.333 CC app/fio/bdev/fio_plugin.o 00:01:43.333 CC test/env/vtophys/vtophys.o 00:01:43.333 CC test/app/stub/stub.o 00:01:43.333 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:43.333 CC test/env/pci/pci_ut.o 00:01:43.333 CC test/app/bdev_svc/bdev_svc.o 00:01:43.623 LINK rpc_client_test 00:01:43.623 LINK spdk_nvme_discover 00:01:43.623 LINK nvmf_tgt 00:01:43.623 CC test/dma/test_dma/test_dma.o 00:01:43.623 LINK interrupt_tgt 00:01:43.885 LINK zipf 00:01:43.885 LINK iscsi_tgt 00:01:43.885 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:43.885 LINK spdk_tgt 00:01:43.885 LINK jsoncat 00:01:43.885 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:43.885 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:43.885 LINK verify 00:01:43.885 LINK ioat_perf 00:01:43.885 CC test/env/mem_callbacks/mem_callbacks.o 00:01:43.885 CXX test/cpp_headers/rpc.o 00:01:43.885 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:43.885 CXX test/cpp_headers/scheduler.o 00:01:43.885 CXX test/cpp_headers/scsi.o 00:01:43.885 CXX test/cpp_headers/scsi_spec.o 00:01:43.885 LINK spdk_trace_record 00:01:43.885 CXX test/cpp_headers/sock.o 00:01:43.885 CXX test/cpp_headers/stdinc.o 00:01:43.885 CXX test/cpp_headers/string.o 00:01:43.885 CXX test/cpp_headers/thread.o 00:01:43.885 CXX test/cpp_headers/trace.o 00:01:43.885 CXX test/cpp_headers/trace_parser.o 00:01:43.885 CXX test/cpp_headers/tree.o 00:01:43.885 CXX test/cpp_headers/ublk.o 00:01:43.885 CXX test/cpp_headers/util.o 00:01:43.885 CXX test/cpp_headers/uuid.o 00:01:43.885 LINK spdk_dd 00:01:43.885 CXX test/cpp_headers/vfio_user_pci.o 00:01:43.885 CXX test/cpp_headers/vfio_user_spec.o 00:01:43.885 CXX test/cpp_headers/version.o 
00:01:43.885 CXX test/cpp_headers/vhost.o 00:01:43.885 CXX test/cpp_headers/vmd.o 00:01:43.885 CXX test/cpp_headers/xor.o 00:01:43.885 CXX test/cpp_headers/zipf.o 00:01:43.885 LINK histogram_perf 00:01:43.885 LINK poller_perf 00:01:43.885 LINK vtophys 00:01:43.885 LINK spdk_trace 00:01:44.142 LINK stub 00:01:44.142 LINK bdev_svc 00:01:44.142 LINK env_dpdk_post_init 00:01:44.142 LINK test_dma 00:01:44.142 LINK pci_ut 00:01:44.142 CC examples/sock/hello_world/hello_sock.o 00:01:44.142 CC examples/idxd/perf/perf.o 00:01:44.400 CC examples/vmd/led/led.o 00:01:44.400 CC examples/vmd/lsvmd/lsvmd.o 00:01:44.400 CC examples/thread/thread/thread_ex.o 00:01:44.400 LINK spdk_top 00:01:44.400 CC app/vhost/vhost.o 00:01:44.400 LINK vhost_fuzz 00:01:44.400 LINK spdk_bdev 00:01:44.400 LINK spdk_nvme 00:01:44.400 LINK nvme_fuzz 00:01:44.400 LINK spdk_nvme_identify 00:01:44.400 LINK lsvmd 00:01:44.400 LINK led 00:01:44.400 CC test/event/reactor_perf/reactor_perf.o 00:01:44.400 CC test/event/event_perf/event_perf.o 00:01:44.400 CC test/event/reactor/reactor.o 00:01:44.400 LINK hello_sock 00:01:44.400 LINK spdk_nvme_perf 00:01:44.400 CC test/event/app_repeat/app_repeat.o 00:01:44.659 CC test/event/scheduler/scheduler.o 00:01:44.659 LINK thread 00:01:44.659 LINK vhost 00:01:44.659 LINK idxd_perf 00:01:44.659 LINK mem_callbacks 00:01:44.659 CC test/nvme/aer/aer.o 00:01:44.659 CC test/nvme/boot_partition/boot_partition.o 00:01:44.659 CC test/nvme/sgl/sgl.o 00:01:44.659 CC test/nvme/overhead/overhead.o 00:01:44.659 CC test/nvme/compliance/nvme_compliance.o 00:01:44.659 CC test/nvme/fused_ordering/fused_ordering.o 00:01:44.659 CC test/nvme/connect_stress/connect_stress.o 00:01:44.659 CC test/nvme/e2edp/nvme_dp.o 00:01:44.659 CC test/nvme/reserve/reserve.o 00:01:44.659 CC test/nvme/fdp/fdp.o 00:01:44.659 CC test/nvme/simple_copy/simple_copy.o 00:01:44.659 CC test/nvme/startup/startup.o 00:01:44.659 CC test/nvme/reset/reset.o 00:01:44.659 CC test/nvme/err_injection/err_injection.o 
00:01:44.659 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:44.659 CC test/nvme/cuse/cuse.o 00:01:44.659 LINK reactor_perf 00:01:44.659 LINK reactor 00:01:44.659 CC test/blobfs/mkfs/mkfs.o 00:01:44.659 LINK event_perf 00:01:44.659 CC test/accel/dif/dif.o 00:01:44.659 LINK app_repeat 00:01:44.659 CC test/lvol/esnap/esnap.o 00:01:44.918 LINK scheduler 00:01:44.918 LINK boot_partition 00:01:44.918 LINK startup 00:01:44.918 LINK connect_stress 00:01:44.918 LINK memory_ut 00:01:44.918 LINK fused_ordering 00:01:44.918 LINK err_injection 00:01:44.918 LINK doorbell_aers 00:01:44.918 LINK reserve 00:01:44.918 LINK mkfs 00:01:44.918 LINK sgl 00:01:44.918 LINK simple_copy 00:01:44.918 LINK reset 00:01:44.918 LINK aer 00:01:44.918 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:44.918 LINK nvme_dp 00:01:44.918 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:44.918 CC examples/nvme/abort/abort.o 00:01:44.918 CC examples/nvme/arbitration/arbitration.o 00:01:44.918 LINK overhead 00:01:44.918 CC examples/nvme/hotplug/hotplug.o 00:01:44.918 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:44.918 CC examples/nvme/reconnect/reconnect.o 00:01:44.918 CC examples/nvme/hello_world/hello_world.o 00:01:44.918 LINK nvme_compliance 00:01:44.918 LINK fdp 00:01:44.918 CC examples/accel/perf/accel_perf.o 00:01:45.176 CC examples/blob/cli/blobcli.o 00:01:45.176 CC examples/blob/hello_world/hello_blob.o 00:01:45.176 LINK dif 00:01:45.176 LINK cmb_copy 00:01:45.176 LINK pmr_persistence 00:01:45.176 LINK hello_world 00:01:45.176 LINK hotplug 00:01:45.176 LINK arbitration 00:01:45.176 LINK reconnect 00:01:45.176 LINK abort 00:01:45.176 LINK hello_blob 00:01:45.434 LINK iscsi_fuzz 00:01:45.434 LINK nvme_manage 00:01:45.434 LINK accel_perf 00:01:45.434 LINK blobcli 00:01:45.693 CC test/bdev/bdevio/bdevio.o 00:01:45.693 LINK cuse 00:01:45.951 CC examples/bdev/bdevperf/bdevperf.o 00:01:45.951 CC examples/bdev/hello_world/hello_bdev.o 00:01:45.951 LINK bdevio 00:01:46.210 LINK hello_bdev 
00:01:46.470 LINK bdevperf 00:01:47.067 CC examples/nvmf/nvmf/nvmf.o 00:01:47.067 LINK nvmf 00:01:48.004 LINK esnap 00:01:48.572 00:01:48.572 real 0m43.262s 00:01:48.572 user 6m30.424s 00:01:48.572 sys 3m19.854s 00:01:48.572 16:43:54 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:48.572 16:43:54 make -- common/autotest_common.sh@10 -- $ set +x 00:01:48.572 ************************************ 00:01:48.572 END TEST make 00:01:48.572 ************************************ 00:01:48.572 16:43:54 -- common/autotest_common.sh@1142 -- $ return 0 00:01:48.572 16:43:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:48.572 16:43:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:48.572 16:43:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:48.572 16:43:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.572 16:43:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:48.572 16:43:54 -- pm/common@44 -- $ pid=3984186 00:01:48.572 16:43:54 -- pm/common@50 -- $ kill -TERM 3984186 00:01:48.572 16:43:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.572 16:43:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:48.572 16:43:54 -- pm/common@44 -- $ pid=3984187 00:01:48.572 16:43:54 -- pm/common@50 -- $ kill -TERM 3984187 00:01:48.572 16:43:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.572 16:43:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:48.572 16:43:54 -- pm/common@44 -- $ pid=3984189 00:01:48.572 16:43:54 -- pm/common@50 -- $ kill -TERM 3984189 00:01:48.572 16:43:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.572 16:43:54 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:48.572 16:43:54 -- pm/common@44 -- $ pid=3984212 00:01:48.572 16:43:54 -- pm/common@50 -- $ sudo -E kill -TERM 3984212 00:01:48.572 16:43:55 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:48.572 16:43:55 -- nvmf/common.sh@7 -- # uname -s 00:01:48.572 16:43:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:48.572 16:43:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:48.572 16:43:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:48.572 16:43:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:48.572 16:43:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:48.572 16:43:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:48.572 16:43:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:48.572 16:43:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:48.572 16:43:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:48.572 16:43:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:48.572 16:43:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:01:48.572 16:43:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:01:48.572 16:43:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:48.572 16:43:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:48.572 16:43:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:48.572 16:43:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:48.572 16:43:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:48.572 16:43:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:48.572 16:43:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:48.572 16:43:55 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:48.572 16:43:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.572 16:43:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.572 16:43:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.572 16:43:55 -- paths/export.sh@5 -- # export PATH 00:01:48.572 16:43:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.572 16:43:55 -- nvmf/common.sh@47 -- # : 0 00:01:48.572 16:43:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:48.572 16:43:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:48.572 16:43:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:48.572 16:43:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:48.572 16:43:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:48.572 16:43:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:48.572 16:43:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:48.572 16:43:55 -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:01:48.572 16:43:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:48.572 16:43:55 -- spdk/autotest.sh@32 -- # uname -s 00:01:48.572 16:43:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:48.572 16:43:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:48.572 16:43:55 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:48.572 16:43:55 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:48.572 16:43:55 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:48.572 16:43:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:48.572 16:43:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:48.572 16:43:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:48.572 16:43:55 -- spdk/autotest.sh@48 -- # udevadm_pid=4042957 00:01:48.572 16:43:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:48.572 16:43:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:48.572 16:43:55 -- pm/common@17 -- # local monitor 00:01:48.572 16:43:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.572 16:43:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.572 16:43:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.572 16:43:55 -- pm/common@21 -- # date +%s 00:01:48.572 16:43:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.572 16:43:55 -- pm/common@21 -- # date +%s 00:01:48.572 16:43:55 -- pm/common@25 -- # sleep 1 00:01:48.572 16:43:55 -- pm/common@21 -- # date +%s 00:01:48.572 16:43:55 -- pm/common@21 -- # date +%s 00:01:48.572 16:43:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721054635 00:01:48.572 16:43:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721054635 00:01:48.572 16:43:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721054635 00:01:48.572 16:43:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721054635 00:01:48.572 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721054635_collect-vmstat.pm.log 00:01:48.572 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721054635_collect-cpu-load.pm.log 00:01:48.572 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721054635_collect-cpu-temp.pm.log 00:01:48.572 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721054635_collect-bmc-pm.bmc.pm.log 00:01:49.508 16:43:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:49.508 16:43:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:49.508 16:43:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:49.508 16:43:56 -- common/autotest_common.sh@10 -- # set +x 00:01:49.508 16:43:56 -- spdk/autotest.sh@59 -- # create_test_list 00:01:49.509 16:43:56 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:49.509 16:43:56 -- common/autotest_common.sh@10 -- # set +x 00:01:49.767 16:43:56 -- spdk/autotest.sh@61 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:49.767 16:43:56 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:49.767 16:43:56 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:49.767 16:43:56 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:49.767 16:43:56 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:49.767 16:43:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:49.767 16:43:56 -- common/autotest_common.sh@1455 -- # uname 00:01:49.767 16:43:56 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:49.767 16:43:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:49.767 16:43:56 -- common/autotest_common.sh@1475 -- # uname 00:01:49.767 16:43:56 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:49.767 16:43:56 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:49.767 16:43:56 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:49.767 16:43:56 -- spdk/autotest.sh@72 -- # hash lcov 00:01:49.767 16:43:56 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:49.767 16:43:56 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:49.767 --rc lcov_branch_coverage=1 00:01:49.767 --rc lcov_function_coverage=1 00:01:49.767 --rc genhtml_branch_coverage=1 00:01:49.767 --rc genhtml_function_coverage=1 00:01:49.767 --rc genhtml_legend=1 00:01:49.768 --rc geninfo_all_blocks=1 00:01:49.768 ' 00:01:49.768 16:43:56 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:49.768 --rc lcov_branch_coverage=1 00:01:49.768 --rc lcov_function_coverage=1 00:01:49.768 --rc genhtml_branch_coverage=1 00:01:49.768 --rc genhtml_function_coverage=1 00:01:49.768 --rc genhtml_legend=1 00:01:49.768 --rc geninfo_all_blocks=1 00:01:49.768 ' 00:01:49.768 16:43:56 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:49.768 --rc 
lcov_branch_coverage=1 00:01:49.768 --rc lcov_function_coverage=1 00:01:49.768 --rc genhtml_branch_coverage=1 00:01:49.768 --rc genhtml_function_coverage=1 00:01:49.768 --rc genhtml_legend=1 00:01:49.768 --rc geninfo_all_blocks=1 00:01:49.768 --no-external' 00:01:49.768 16:43:56 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:49.768 --rc lcov_branch_coverage=1 00:01:49.768 --rc lcov_function_coverage=1 00:01:49.768 --rc genhtml_branch_coverage=1 00:01:49.768 --rc genhtml_function_coverage=1 00:01:49.768 --rc genhtml_legend=1 00:01:49.768 --rc geninfo_all_blocks=1 00:01:49.768 --no-external' 00:01:49.768 16:43:56 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:49.768 lcov: LCOV version 1.14 00:01:49.768 16:43:56 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no 
functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:01:53.958 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:01:53.958 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 
00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:01:53.958 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:01:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:01:53.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:01:53.959 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:01:53.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:01:53.959 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:01:53.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:08.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:08.833 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:15.398 16:44:20 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:15.398 16:44:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:15.398 16:44:20 -- common/autotest_common.sh@10 -- # set +x 00:02:15.398 16:44:20 -- spdk/autotest.sh@91 -- # rm -f 00:02:15.398 16:44:20 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:16.773 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:16.773 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:16.773 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:16.773 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:16.773 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:16.773 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:17.031 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:17.031 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:17.031 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:17.031 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:17.031 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:17.031 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:17.031 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:17.031 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:17.031 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:17.031 0000:80:04.1 (8086 2021): 
Already using the ioatdma driver 00:02:17.031 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:17.290 16:44:23 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:17.290 16:44:23 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:17.290 16:44:23 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:17.290 16:44:23 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:17.290 16:44:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:17.290 16:44:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:17.290 16:44:23 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:17.290 16:44:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:17.290 16:44:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:17.290 16:44:23 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:17.290 16:44:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:17.290 16:44:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:17.290 16:44:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:17.290 16:44:23 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:17.290 16:44:23 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:17.290 No valid GPT data, bailing 00:02:17.290 16:44:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:17.290 16:44:23 -- scripts/common.sh@391 -- # pt= 00:02:17.290 16:44:23 -- scripts/common.sh@392 -- # return 1 00:02:17.290 16:44:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:17.290 1+0 records in 00:02:17.290 1+0 records out 00:02:17.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428602 s, 245 MB/s 00:02:17.290 16:44:23 -- spdk/autotest.sh@118 -- # sync 00:02:17.290 16:44:23 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:17.290 16:44:23 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:17.290 16:44:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:22.635 16:44:28 -- spdk/autotest.sh@124 -- # uname -s 00:02:22.635 16:44:28 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:22.635 16:44:28 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:22.635 16:44:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:22.635 16:44:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:22.635 16:44:28 -- common/autotest_common.sh@10 -- # set +x 00:02:22.635 ************************************ 00:02:22.635 START TEST setup.sh 00:02:22.635 ************************************ 00:02:22.635 16:44:28 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:22.635 * Looking for test storage... 00:02:22.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:22.635 16:44:28 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:22.635 16:44:28 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:22.635 16:44:28 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:22.635 16:44:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:22.635 16:44:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:22.635 16:44:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:22.635 ************************************ 00:02:22.635 START TEST acl 00:02:22.635 ************************************ 00:02:22.635 16:44:28 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:22.635 * Looking for test storage... 
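Earlier in the trace (autotest.sh@114), the harness zeroes the first MiB of the NVMe namespace with `dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1` after the GPT probe bails out. A minimal standalone sketch of that wipe step (hypothetical function name, demonstrated against a regular file rather than a block device):

```shell
# Hypothetical helper sketching the pre-cleanup wipe from the trace
# ("dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1"): zero the first
# MiB of a target to clear stale partition/GPT signatures while
# leaving the rest of the data untouched.
wipe_first_mib() {
    local target=$1
    # conv=notrunc keeps a regular file at its original size; on a
    # real block device it has no effect.
    dd if=/dev/zero of="$target" bs=1M count=1 conv=notrunc status=none
}
```

On a device carrying a partition table this clears the protective MBR and primary GPT header, which is why the next `spdk-gpt.py` probe reports no valid GPT data.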
00:02:22.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:22.635 16:44:28 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:22.635 16:44:28 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:22.635 16:44:28 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:22.635 16:44:28 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:22.635 16:44:28 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:22.635 16:44:28 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:22.635 16:44:28 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:22.635 16:44:28 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:22.635 16:44:28 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:22.635 16:44:28 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:22.635 16:44:28 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:22.635 16:44:28 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:22.635 16:44:28 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:22.635 16:44:28 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:22.635 16:44:28 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:22.635 16:44:28 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:25.171 16:44:31 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:25.171 16:44:31 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:25.171 16:44:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.171 16:44:31 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:25.171 16:44:31 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.171 16:44:31 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:27.073 Hugepages 00:02:27.073 node hugesize free / total 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.073 00:02:27.073 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:27.073 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- 
setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:80:04.5 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:27.332 16:44:33 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:27.332 16:44:33 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:27.332 16:44:33 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:27.332 16:44:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:27.332 ************************************ 00:02:27.332 START TEST denied 00:02:27.332 ************************************ 00:02:27.332 16:44:33 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:27.332 16:44:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:27.332 16:44:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:27.332 16:44:33 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:27.332 16:44:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:27.332 16:44:33 setup.sh.acl.denied -- setup/common.sh@10 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:30.620 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:30.620 16:44:36 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:30.620 16:44:36 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:30.620 16:44:36 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:30.620 16:44:36 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:30.620 16:44:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:30.620 16:44:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:30.620 16:44:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:30.620 16:44:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:30.620 16:44:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:30.620 16:44:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:34.814 00:02:34.814 real 0m6.788s 00:02:34.814 user 0m2.231s 00:02:34.814 sys 0m3.782s 00:02:34.814 16:44:40 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:34.814 16:44:40 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:34.814 ************************************ 00:02:34.814 END TEST denied 00:02:34.814 ************************************ 00:02:34.814 16:44:40 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:34.814 16:44:40 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:34.814 16:44:40 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:34.814 16:44:40 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:34.814 16:44:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:34.814 
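The `denied` test that just finished runs `setup.sh config` with `PCI_BLOCKED=' 0000:5e:00.0'` and greps for the "Skipping denied controller" line. The block-list membership check it exercises can be sketched as follows (hypothetical function name; the real logic lives in SPDK's scripts/setup.sh):

```shell
# Hypothetical sketch of the PCI_BLOCKED filter the "denied" test
# drives: PCI_BLOCKED is a space-separated list of PCI BDFs that
# setup.sh must refuse to touch.
pci_blocked() {
    local bdf=$1
    # Pad both sides with spaces so a BDF only matches as a whole
    # token (e.g. 0000:5e:00.0 does not match inside 0000:5e:00.01).
    [[ " ${PCI_BLOCKED} " == *" ${bdf} "* ]]
}
```

The companion `allowed` test inverts the idea with `PCI_ALLOWED`, binding only the listed controller (here 0000:5e:00.0 moves from nvme to vfio-pci).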
************************************ 00:02:34.814 START TEST allowed 00:02:34.814 ************************************ 00:02:34.814 16:44:40 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:34.814 16:44:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:34.814 16:44:40 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:34.814 16:44:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:34.814 16:44:40 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.814 16:44:40 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:38.100 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:38.100 16:44:44 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:38.100 16:44:44 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:38.100 16:44:44 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:38.100 16:44:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.100 16:44:44 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.386 00:02:41.386 real 0m6.578s 00:02:41.386 user 0m1.923s 00:02:41.386 sys 0m3.643s 00:02:41.386 16:44:47 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:41.386 16:44:47 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:41.386 ************************************ 00:02:41.386 END TEST allowed 00:02:41.386 ************************************ 00:02:41.386 16:44:47 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:41.386 00:02:41.386 real 0m18.935s 00:02:41.386 user 0m6.154s 00:02:41.386 sys 0m11.119s 00:02:41.386 16:44:47 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:41.386 16:44:47 setup.sh.acl -- common/autotest_common.sh@10 -- 
# set +x 00:02:41.386 ************************************ 00:02:41.386 END TEST acl 00:02:41.386 ************************************ 00:02:41.386 16:44:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:41.386 16:44:47 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:41.386 16:44:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:41.386 16:44:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.386 16:44:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:41.386 ************************************ 00:02:41.386 START TEST hugepages 00:02:41.386 ************************************ 00:02:41.386 16:44:47 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:41.386 * Looking for test storage... 00:02:41.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:41.386 16:44:47 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173468328 kB' 'MemAvailable: 176340376 kB' 'Buffers: 3896 kB' 'Cached: 10139420 kB' 'SwapCached: 0 kB' 'Active: 7149028 kB' 'Inactive: 3507524 kB' 'Active(anon): 6757020 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517016 kB' 'Mapped: 208840 kB' 'Shmem: 6243784 kB' 'KReclaimable: 233932 kB' 'Slab: 813584 kB' 'SReclaimable: 233932 kB' 'SUnreclaim: 579652 kB' 'KernelStack: 20512 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8278064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315452 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:41.386 16:44:47 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.386 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.387 16:44:47 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:02:41.387 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:41.388 
16:44:47 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:41.388 16:44:47 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:41.388 16:44:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:41.388 16:44:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.388 16:44:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:41.388 
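The long xtrace run above is `setup/common.sh`'s `get_meminfo` walking every /proc/meminfo field until it matches `Hugepagesize` and echoes its value (2048). A minimal sketch of that pattern, using a sample file in place of /proc/meminfo (an assumption, for portability; the real script reads /proc/meminfo or a per-node meminfo directly):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: split each line on
# ': ' into key/value/unit and return the value for the requested key.
get_meminfo() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        # Matches e.g. "Hugepagesize:   2048 kB" -> var=Hugepagesize val=2048
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Sample standing in for /proc/meminfo (values taken from the log above):
cat > /tmp/meminfo.sample <<'EOF'
MemTotal: 191381152 kB
HugePages_Total: 2048
Hugepagesize: 2048 kB
EOF

get_meminfo Hugepagesize /tmp/meminfo.sample
```

The subsequent `clear_hp` loop in the log applies the inverse operation, writing 0 to each node's `hugepages-*/nr_hugepages` sysfs file before the test sizes them again.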
************************************ 00:02:41.388 START TEST default_setup 00:02:41.388 ************************************ 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes 
in "${user_nodes[@]}" 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.388 16:44:47 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:43.929 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:43.929 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:44.885 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:44.886 
16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175604668 kB' 'MemAvailable: 178476668 kB' 'Buffers: 3896 kB' 'Cached: 10139528 kB' 'SwapCached: 0 kB' 'Active: 7171756 kB' 'Inactive: 3507524 kB' 'Active(anon): 6779748 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 
'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540112 kB' 'Mapped: 209288 kB' 'Shmem: 6243892 kB' 'KReclaimable: 233836 kB' 'Slab: 811924 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578088 kB' 'KernelStack: 20656 kB' 'PageTables: 9312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8304252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.886 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.886 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo 
HugePages_Surp 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175605180 kB' 'MemAvailable: 178477180 kB' 'Buffers: 3896 kB' 'Cached: 10139532 kB' 'SwapCached: 0 kB' 'Active: 7172452 kB' 'Inactive: 3507524 kB' 'Active(anon): 6780444 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539780 kB' 'Mapped: 209616 kB' 'Shmem: 6243896 kB' 'KReclaimable: 233836 kB' 'Slab: 811860 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578024 kB' 'KernelStack: 20832 kB' 'PageTables: 9628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 
8304520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315536 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 
16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.887 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.888 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # [repeated per-field scan condensed: Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd each fail the match against HugePages_Surp and are skipped via 'continue'; HugePages_Surp then matches]
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175608012 kB' 'MemAvailable: 178480012 kB' 'Buffers: 3896 kB' 'Cached: 10139548 kB' 'SwapCached: 0 kB' 'Active: 7166584 kB' 'Inactive: 3507524 kB' 'Active(anon): 6774576 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533900 kB' 'Mapped: 208764 kB' 'Shmem: 6243912 kB' 'KReclaimable: 233836 kB' 'Slab: 811864 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578028 kB' 'KernelStack: 20736 kB' 'PageTables: 9564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB'
00:02:44.888 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # [repeated per-field scan condensed: every field from MemTotal through HugePages_Free fails the match against HugePages_Rsvd and is skipped via 'continue'; HugePages_Rsvd then matches]
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@18-@31 -- # [function prologue as above: node=, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ', read -r var val _]
00:02:44.889 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175607324 kB' 'MemAvailable: 178479324 kB' 'Buffers: 3896 kB' 'Cached: 10139572 kB' 'SwapCached: 0 kB' 'Active: 7166792 kB' 'Inactive: 3507524 kB' 'Active(anon): 6774784 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533968 kB' 'Mapped: 208764 kB' 'Shmem: 6243936 kB' 'KReclaimable: 233836 kB' 'Slab: 811864 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578028 kB' 'KernelStack: 20624 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB'
00:02:44.890 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # [per-field scan for HugePages_Total begins: MemTotal through SwapFree fail the match and are skipped via 'continue'; scan continues]
00:02:45.216 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[
Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.216 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.216 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.216 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.216 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.216 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:02:45.217 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86143520 kB' 'MemUsed: 11519164 kB' 'SwapCached: 0 kB' 'Active: 4991364 kB' 'Inactive: 3335416 kB' 'Active(anon): 4833824 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8150140 kB' 'Mapped: 72668 kB' 'AnonPages: 179860 kB' 'Shmem: 4657184 kB' 'KernelStack: 10824 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125676 kB' 'Slab: 392812 kB' 'SReclaimable: 125676 kB' 'SUnreclaim: 267136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.218 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:45.219 node0=1024 expecting 1024 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:45.219 00:02:45.219 real 0m3.947s 00:02:45.219 user 0m1.269s 00:02:45.219 sys 0m1.915s 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:45.219 16:44:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:45.219 ************************************ 00:02:45.219 END TEST default_setup 00:02:45.219 ************************************ 00:02:45.219 16:44:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:45.219 16:44:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:45.219 16:44:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:45.219 16:44:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:45.219 16:44:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:45.219 
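The xtrace records above (and below) repeat the same three-step loop for every /proc/meminfo field: set `IFS=': '`, `read -r var val _`, then compare the field name against the key being looked up (`HugePages_Surp`, `AnonHugePages`, ...) and `echo` its value on a match. A minimal sketch of that parsing pattern follows; this is a re-creation for illustration, not the actual `setup/common.sh` source, and it is fed sample records copied from the meminfo dump in this log rather than the live /proc/meminfo.

```shell
# Sketch of the get_meminfo loop traced above (assumption: re-created
# from the xtrace, not taken from SPDK's setup/common.sh). Each line is
# split on ': ' into field name, value, and an optional unit; the value
# of the requested field is printed.
get_meminfo() {
    get=$1
    while IFS=': ' read -r var val _; do
        # e.g. "HugePages_Surp: 0" -> var=HugePages_Surp, val=0
        if [ "$var" = "$get" ]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Sample records copied from the meminfo dump earlier in this log:
sample='MemTotal: 191381152 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0
Hugepagesize: 2048 kB'

printf '%s\n' "$sample" | get_meminfo HugePages_Total   # prints 1024
printf '%s\n' "$sample" | get_meminfo HugePages_Surp    # prints 0
```

The real script walks every field and `continue`s on non-matches, which is why the trace shows one `[[ ... ]]` / `continue` pair per meminfo line; the sketch short-circuits with `return 0` instead, but the split-and-compare mechanism is the same.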
************************************ 00:02:45.219 START TEST per_node_1G_alloc 00:02:45.219 ************************************ 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:45.219 
16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.219 16:44:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:47.751 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:47.752 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:80:04.6 (8086 
2021): Already using the vfio-pci driver 00:02:47.752 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:47.752 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:47.752 16:44:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175598196 kB' 'MemAvailable: 178470196 kB' 'Buffers: 3896 kB' 'Cached: 10139660 kB' 'SwapCached: 0 kB' 'Active: 7167716 kB' 'Inactive: 3507524 kB' 'Active(anon): 6775708 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534508 kB' 'Mapped: 208864 kB' 'Shmem: 6244024 kB' 'KReclaimable: 233836 kB' 'Slab: 812424 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578588 kB' 'KernelStack: 20848 kB' 'PageTables: 9972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.752 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 
16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.753 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:47.754 16:44:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175598720 kB' 'MemAvailable: 178470720 kB' 'Buffers: 3896 kB' 'Cached: 10139664 kB' 'SwapCached: 0 kB' 'Active: 7168272 kB' 'Inactive: 3507524 kB' 'Active(anon): 6776264 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535020 kB' 'Mapped: 208864 kB' 'Shmem: 6244028 kB' 'KReclaimable: 233836 kB' 'Slab: 812424 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578588 kB' 'KernelStack: 20832 kB' 'PageTables: 9736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8299228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB'
00:02:47.754 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace: get_meminfo scans each /proc/meminfo field with `IFS=': ' read -r var val _` and `continue`s until the field name matches HugePages_Surp; the identical per-field trace lines (MemTotal through HugePages_Free) are elided here]
00:02:47.755 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:47.755 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:47.755 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:47.755 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:47.755 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:47.755 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:47.756 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:47.756 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:47.756 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:47.756 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.756 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:47.756 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:47.756 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.756 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.756 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:47.756 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:47.756
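The trace above repeatedly shows get_meminfo reading /proc/meminfo line by line with `IFS=': ' read -r var val _` until the requested field matches. The following is a minimal, self-contained sketch of that pattern; the function name `get_meminfo_value` and its FILE parameter are simplifications for illustration, not the exact helper from SPDK's test/setup/common.sh (which also supports per-NUMA-node lookups via /sys/devices/system/node).

```shell
# get_meminfo_value NAME [FILE]
# Print the numeric value of field NAME from a meminfo-style file
# (default /proc/meminfo). Sketch of the loop traced in this log.
get_meminfo_value() {
    get=$1
    mem_f=${2:-/proc/meminfo}
    # IFS=': ' splits "HugePages_Surp: 0" into var=HugePages_Surp val=0;
    # the trailing _ swallows a unit suffix such as "kB".
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$get" ]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    return 1    # field not found
}
```

With the snapshot above, `get_meminfo_value HugePages_Surp` prints 0, which is exactly the `surp=0` assignment the trace records at setup/hugepages.sh@99.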
16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175598712 kB' 'MemAvailable: 178470712 kB' 'Buffers: 3896 kB' 'Cached: 10139676 kB' 'SwapCached: 0 kB' 'Active: 7167884 kB' 'Inactive: 3507524 kB' 'Active(anon): 6775876 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535660 kB' 'Mapped: 208776 kB' 'Shmem: 6244040 kB' 'KReclaimable: 233836 kB' 'Slab: 812384 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578548 kB' 'KernelStack: 20784 kB' 'PageTables: 9888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8299252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB'
00:02:47.756 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace: the same read/continue loop now scans each field for a HugePages_Rsvd match; the identical per-field trace lines (MemTotal through CommitLimit) are elided here, and the log continues below mid-iteration]
00:02:48.022 16:44:54
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:48.022 nr_hugepages=1024 00:02:48.022 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.022 resv_hugepages=0 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.023 surplus_hugepages=0 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.023 anon_hugepages=0 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # 
(( 1024 == nr_hugepages + surp + resv )) 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175602884 kB' 'MemAvailable: 178474884 kB' 'Buffers: 3896 kB' 'Cached: 10139716 kB' 'SwapCached: 0 kB' 'Active: 7166440 kB' 'Inactive: 3507524 kB' 'Active(anon): 6774432 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533648 
kB' 'Mapped: 208776 kB' 'Shmem: 6244080 kB' 'KReclaimable: 233836 kB' 'Slab: 812384 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578548 kB' 'KernelStack: 20560 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315436 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.023 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 
16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 
== nr_hugepages + surp + resv )) 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.024 16:44:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.024 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87190672 kB' 'MemUsed: 10472012 kB' 'SwapCached: 0 kB' 'Active: 4991580 kB' 'Inactive: 3335416 kB' 'Active(anon): 4834040 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8150264 kB' 'Mapped: 72664 kB' 'AnonPages: 179940 kB' 'Shmem: 4657308 kB' 'KernelStack: 10824 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125676 kB' 'Slab: 393332 kB' 'SReclaimable: 125676 kB' 'SUnreclaim: 267656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.025 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:48.026 16:44:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88413984 kB' 'MemUsed: 5304484 kB' 'SwapCached: 0 kB' 'Active: 2174804 kB' 'Inactive: 172108 kB' 'Active(anon): 1940336 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1993372 kB' 'Mapped: 136108 kB' 'AnonPages: 353624 kB' 'Shmem: 1586796 kB' 'KernelStack: 9672 kB' 'PageTables: 4604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108160 kB' 'Slab: 419052 kB' 'SReclaimable: 108160 kB' 'SUnreclaim: 310892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.026 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.027 16:44:54 [... identical read/compare/continue xtrace iterations for the remaining /proc/meminfo fields (Mapped through HugePages_Free) elided ...] setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:48.027 node0=512 expecting 512 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:48.027 node1=512 expecting 512 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:48.027 00:02:48.027 real 0m2.836s 00:02:48.027 user 0m1.135s 00:02:48.027 sys 0m1.745s 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:48.027 16:44:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:48.027 ************************************ 00:02:48.027 END TEST per_node_1G_alloc 00:02:48.027 ************************************ 00:02:48.027 16:44:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:48.027 16:44:54 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:48.027 16:44:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:48.027 16:44:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:48.027 16:44:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:48.027 
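The xtrace above comes from two patterns in SPDK's setup scripts: a `get_meminfo`-style loop that reads `/proc/meminfo` line by line until the requested field matches (each non-matching field produces one `[[ ... ]]` / `continue` pair in the trace), and an even per-node split that assigns `nr_hugepages / node_count` pages to each NUMA node (hence `node0=512 expecting 512`). A minimal sketch of both, assuming simplified function bodies — this is not SPDK's actual `setup/common.sh` or `setup/hugepages.sh`, and the fixture file stands in for `/proc/meminfo`:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern: split each "Field: value kB" line on
# ': ' and skip (continue) until the requested field name matches exactly.
get_meminfo() {
  local get=$1 mem_f=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # one continue per non-matching field, as in the trace
    echo "${val:-0}"
    return 0
  done < "$mem_f"
  echo 0   # field absent: report 0, matching the trace's 'echo 0'
}

# Sketch of the even allocation: divide the page count equally across nodes
# into a global nodes_test array (simplified from hugepages.sh@81-84).
split_hugepages_evenly() {
  local nr=$1 nodes=$2 n
  nodes_test=()
  for (( n = 0; n < nodes; n++ )); do
    nodes_test[n]=$(( nr / nodes ))
  done
}

# Usage with a fixture file (hypothetical path) in place of /proc/meminfo:
printf '%s\n' 'MemTotal: 191381152 kB' 'HugePages_Surp: 0' > /tmp/meminfo.fixture
get_meminfo HugePages_Surp /tmp/meminfo.fixture   # -> prints 0
split_hugepages_evenly 1024 2
echo "node0=${nodes_test[0]} expecting 512"       # 1024 pages over 2 nodes
```

With 1024 pages and 2 nodes, each node receives 512 pages, which is the `node0=512 expecting 512` / `node1=512 expecting 512` pair the log verifies before declaring the test passed.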
************************************ 00:02:48.027 START TEST even_2G_alloc 00:02:48.027 ************************************ 00:02:48.027 16:44:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:48.027 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 
-- # : 512 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.028 16:44:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:50.565 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:50.565 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:80:04.5 (8086 
2021): Already using the vfio-pci driver 00:02:50.565 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:50.565 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:50.565 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:50.565 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:50.565 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:50.565 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:50.565 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:50.565 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:50.565 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:50.565 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.566 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175607744 kB' 'MemAvailable: 178479744 kB' 'Buffers: 3896 kB' 'Cached: 10139820 kB' 'SwapCached: 0 kB' 'Active: 7166744 kB' 'Inactive: 3507524 kB' 'Active(anon): 6774736 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533720 kB' 'Mapped: 207680 kB' 'Shmem: 6244184 kB' 'KReclaimable: 233836 kB' 'Slab: 812456 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578620 kB' 'KernelStack: 21200 kB' 'PageTables: 11048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8286700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.566 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.566 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.566 16:44:57 [... identical read/compare/continue xtrace iterations for the remaining /proc/meminfo fields (MemFree through Percpu) elided ...] setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175608596 kB' 'MemAvailable: 178480596 kB' 'Buffers: 3896 kB' 'Cached: 10139820 kB' 'SwapCached: 0 kB' 'Active: 7166820 kB' 'Inactive: 3507524 kB' 'Active(anon): 6774812 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533808 kB' 'Mapped: 207664 kB' 'Shmem: 6244184 kB' 'KReclaimable: 233836 kB' 'Slab: 812360 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578524 kB' 'KernelStack: 21056 kB' 'PageTables: 10452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8288208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.567 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 
16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.568 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.569 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175608040 kB' 'MemAvailable: 178480040 kB' 'Buffers: 3896 kB' 'Cached: 10139840 kB' 'SwapCached: 0 kB' 'Active: 7165680 kB' 'Inactive: 3507524 kB' 'Active(anon): 6773672 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532652 kB' 'Mapped: 207664 kB' 'Shmem: 6244204 kB' 'KReclaimable: 233836 kB' 'Slab: 812584 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578748 kB' 'KernelStack: 20656 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8285612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 
16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.569 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.570 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.570 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:50.571 nr_hugepages=1024 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:50.571 resv_hugepages=0 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:50.571 surplus_hugepages=0 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:50.571 anon_hugepages=0 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:50.571 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175608300 kB' 'MemAvailable: 178480300 kB' 'Buffers: 3896 kB' 'Cached: 10139864 kB' 'SwapCached: 0 kB' 'Active: 7164596 kB' 'Inactive: 3507524 kB' 'Active(anon): 6772588 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531556 kB' 'Mapped: 207660 kB' 'Shmem: 6244228 kB' 'KReclaimable: 233836 kB' 'Slab: 812488 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578652 kB' 'KernelStack: 20560 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8285636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.571 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.572 
16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:50.572 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.573 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87181988 kB' 'MemUsed: 10480696 kB' 'SwapCached: 0 kB' 'Active: 4991772 kB' 'Inactive: 3335416 kB' 'Active(anon): 4834232 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8150400 kB' 'Mapped: 72296 kB' 'AnonPages: 179932 kB' 'Shmem: 4657444 kB' 'KernelStack: 10840 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125676 kB' 'Slab: 393524 kB' 'SReclaimable: 125676 kB' 'SUnreclaim: 267848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.573 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88426320 kB' 'MemUsed: 5292148 kB' 'SwapCached: 0 kB' 'Active: 2173136 kB' 'Inactive: 172108 kB' 'Active(anon): 1938668 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1993380 kB' 'Mapped: 135364 kB' 'AnonPages: 351916 kB' 'Shmem: 1586804 kB' 'KernelStack: 9720 kB' 'PageTables: 4644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108160 kB' 'Slab: 418964 kB' 'SReclaimable: 108160 kB' 'SUnreclaim: 310804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.574 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... identical "IFS=': ' / read -r var val _ / continue" trace repeated for each remaining /proc/meminfo field (AnonPages through Unaccepted) ...] 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.575 16:44:57
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:50.575 node0=512 expecting 512 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.575 16:44:57 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:50.575 node1=512 expecting 512 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:50.575 00:02:50.575 real 0m2.654s 00:02:50.575 user 0m1.020s 00:02:50.575 sys 0m1.633s 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:50.575 16:44:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:50.575 ************************************ 00:02:50.575 END TEST even_2G_alloc 00:02:50.575 ************************************ 00:02:50.835 16:44:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:50.835 16:44:57 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:50.835 16:44:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.835 16:44:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.835 16:44:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:50.835 ************************************ 00:02:50.835 START TEST odd_alloc 00:02:50.835 ************************************ 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:50.835 16:44:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.835 16:44:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.369 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:53.370 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:53.370 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@93 -- # local resv 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175596612 kB' 'MemAvailable: 178468612 kB' 'Buffers: 3896 kB' 'Cached: 10139976 kB' 'SwapCached: 0 kB' 'Active: 7167588 kB' 'Inactive: 3507524 kB' 'Active(anon): 6775580 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534948 kB' 
'Mapped: 208420 kB' 'Shmem: 6244340 kB' 'KReclaimable: 233836 kB' 'Slab: 811992 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578156 kB' 'KernelStack: 20464 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8291992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:53.631 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.632 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... identical "IFS=': ' / read -r var val _ / continue" trace repeated for each /proc/meminfo field (Cached through VmallocChunk) ...] 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.633 
16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175593272 kB' 'MemAvailable: 178465272 kB' 'Buffers: 3896 kB' 'Cached: 10139976 kB' 'SwapCached: 0 kB' 'Active: 7171404 kB' 'Inactive: 3507524 kB' 'Active(anon): 6779396 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539296 kB' 'Mapped: 208328 kB' 'Shmem: 6244340 kB' 'KReclaimable: 233836 kB' 'Slab: 812000 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578164 kB' 'KernelStack: 20512 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8297064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.633 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.633 [... identical setup/common.sh@31/@32 compare-and-continue trace repeated for every remaining /proc/meminfo key ...] 00:02:53.634 16:45:00
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.634 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.634 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.634 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:53.634 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.634 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175592516 kB' 'MemAvailable: 178464516 kB' 'Buffers: 3896 kB' 'Cached: 10139976 kB' 'SwapCached: 0 kB' 'Active: 7176644 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784636 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544100 kB' 'Mapped: 208720 kB' 'Shmem: 6244340 kB' 'KReclaimable: 233836 kB' 'Slab: 
811992 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578156 kB' 'KernelStack: 20512 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8301188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315472 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.635 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.635 [... identical setup/common.sh@31/@32 compare-and-continue trace repeated for subsequent /proc/meminfo keys ...] 00:02:53.636 16:45:00
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:53.636 nr_hugepages=1025 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:53.636 resv_hugepages=0 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:53.636 surplus_hugepages=0 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:53.636 anon_hugepages=0 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:53.636 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175592516 kB' 'MemAvailable: 178464516 kB' 'Buffers: 3896 kB' 'Cached: 10140016 kB' 'SwapCached: 0 kB' 'Active: 7178508 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786500 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545392 kB' 'Mapped: 208656 kB' 'Shmem: 6244380 kB' 'KReclaimable: 233836 kB' 'Slab: 811948 kB' 'SReclaimable: 233836 kB' 'SUnreclaim: 578112 kB' 'KernelStack: 20544 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8302936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315492 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.637 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.638 16:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.638 16:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87172376 kB' 'MemUsed: 10490308 kB' 'SwapCached: 0 kB' 'Active: 4998580 kB' 'Inactive: 3335416 kB' 'Active(anon): 4841040 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8150480 kB' 'Mapped: 72380 kB' 'AnonPages: 186956 kB' 'Shmem: 4657524 kB' 'KernelStack: 10840 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125676 kB' 'Slab: 393448 kB' 'SReclaimable: 125676 kB' 'SUnreclaim: 267772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.638 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.639 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88422556 kB' 'MemUsed: 5295912 kB' 'SwapCached: 0 kB' 'Active: 2176008 kB' 'Inactive: 172108 kB' 
'Active(anon): 1941540 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1993456 kB' 'Mapped: 136220 kB' 'AnonPages: 355012 kB' 'Shmem: 1586880 kB' 'KernelStack: 9752 kB' 'PageTables: 4796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108160 kB' 'Slab: 418500 kB' 'SReclaimable: 108160 kB' 'SUnreclaim: 310340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 
16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.640 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:53.641 node0=512 expecting 513 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc 
-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:53.641 node1=513 expecting 512 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:53.641 00:02:53.641 real 0m2.946s 00:02:53.641 user 0m1.186s 00:02:53.641 sys 0m1.819s 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:53.641 16:45:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:53.641 ************************************ 00:02:53.641 END TEST odd_alloc 00:02:53.641 ************************************ 00:02:53.641 16:45:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:53.641 16:45:00 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:53.641 16:45:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.641 16:45:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.641 16:45:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:53.641 ************************************ 00:02:53.641 START TEST custom_alloc 00:02:53.641 ************************************ 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:53.641 16:45:00 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:53.641 16:45:00 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:53.641 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- 
# local -g nodes_test 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # 
nodes_test=() 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.900 16:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:56.429 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:56.429 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 
00:02:56.429 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:56.429 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f 
mem 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.429 16:45:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174495944 kB' 'MemAvailable: 177367912 kB' 'Buffers: 3896 kB' 'Cached: 10140120 kB' 'SwapCached: 0 kB' 'Active: 7173844 kB' 'Inactive: 3507524 kB' 'Active(anon): 6781836 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540132 kB' 'Mapped: 208628 kB' 'Shmem: 6244484 kB' 'KReclaimable: 233772 kB' 'Slab: 812180 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 578408 kB' 'KernelStack: 20544 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8295816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315616 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
3145728 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.429 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.429 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174495264 kB' 'MemAvailable: 177367232 kB' 'Buffers: 3896 kB' 'Cached: 10140124 kB' 'SwapCached: 0 kB' 'Active: 7173480 kB' 'Inactive: 3507524 kB' 'Active(anon): 6781472 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540264 kB' 'Mapped: 208608 kB' 'Shmem: 6244488 kB' 'KReclaimable: 233772 kB' 'Slab: 812156 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 578384 kB' 'KernelStack: 20496 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8295836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315584 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 
16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.430 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174495436 kB' 'MemAvailable: 177367404 kB' 'Buffers: 3896 kB' 'Cached: 10140140 kB' 'SwapCached: 0 kB' 'Active: 7173892 kB' 'Inactive: 3507524 kB' 'Active(anon): 6781884 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540644 kB' 'Mapped: 208608 kB' 'Shmem: 6244504 kB' 'KReclaimable: 233772 kB' 'Slab: 812236 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 578464 kB' 'KernelStack: 20544 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8295856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315568 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 
16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:56.431 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (repetitive xtrace condensed: the IFS=': ' read loop skipped every /proc/meminfo key until HugePages_Rsvd matched)
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:56.432 nr_hugepages=1536
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:56.432 resv_hugepages=0
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:56.432 surplus_hugepages=0
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:56.432 anon_hugepages=0
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc
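The trace above is a per-key scan of /proc/meminfo: each line is read with `IFS=': '`, non-matching keys hit `continue`, and the value of the requested key is echoed. A minimal sketch of that technique follows (an illustrative re-implementation, not SPDK's actual setup/common.sh; the function name and file argument are assumptions for demonstration):

```shell
# Sketch: scan a meminfo-style file for one key, the same way the xtrace
# above does -- split on ': ', skip until the key matches, print its value.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Matching key found: the numeric value is in $val (unit lands in $_)
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
```

Against the meminfo snapshot printed below, `get_meminfo_sketch HugePages_Rsvd` would yield 0, which is exactly the `resv=0` assignment in the log.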
-- setup/common.sh@28 -- # mapfile -t mem 00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174495576 kB' 'MemAvailable: 177367544 kB' 'Buffers: 3896 kB' 'Cached: 10140164 kB' 'SwapCached: 0 kB' 'Active: 7173580 kB' 'Inactive: 3507524 kB' 'Active(anon): 6781572 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540308 kB' 'Mapped: 208608 kB' 'Shmem: 6244528 kB' 'KReclaimable: 233772 kB' 'Slab: 812244 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 578472 kB' 'KernelStack: 20528 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8295876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315568 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.432 16:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31-32 -- # (repetitive xtrace condensed: the IFS=': ' read loop skipped every /proc/meminfo key until HugePages_Total matched)
00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:56.694 16:45:03
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.694 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87173496 kB' 'MemUsed: 10489188 kB' 'SwapCached: 0 kB' 'Active: 4994340 kB' 'Inactive: 3335416 kB' 'Active(anon): 4836800 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8150564 kB' 'Mapped: 72356 kB' 'AnonPages: 182452 kB' 'Shmem: 4657608 kB' 'KernelStack: 10856 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125676 kB' 'Slab: 393480 kB' 'SReclaimable: 125676 kB' 'SUnreclaim: 267804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.694 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87324552 kB' 'MemUsed: 6393916 kB' 'SwapCached: 0 kB' 'Active: 2179096 kB' 'Inactive: 172108 kB' 'Active(anon): 1944628 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1993512 kB' 'Mapped: 136252 kB' 'AnonPages: 357696 kB' 'Shmem: 1586936 kB' 'KernelStack: 9656 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108096 kB' 'Slab: 418764 kB' 'SReclaimable: 108096 kB' 'SUnreclaim: 310668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.695 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 
16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.696 16:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:56.696 node0=512 expecting 512 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:56.696 node1=1024 expecting 1024 00:02:56.696 16:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:56.696 00:02:56.696 real 0m2.876s 00:02:56.696 user 0m1.196s 00:02:56.697 sys 0m1.744s 00:02:56.697 16:45:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:56.697 16:45:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:56.697 ************************************ 00:02:56.697 END TEST custom_alloc 00:02:56.697 ************************************ 00:02:56.697 16:45:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:56.697 16:45:03 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:56.697 16:45:03 setup.sh.hugepages -- 
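The `no_shrink_alloc` prologue that follows repeats the `get_test_nr_hugepages` / `get_test_nr_hugepages_per_node` arithmetic from `setup/hugepages.sh`: a `size=2097152` request divided by the 2048 kB default hugepage size yields `nr_hugepages=1024`, which is then pinned to the single user-supplied node (node 0). A minimal sketch of that computation — variable names mirror the trace, and the shared-unit (kB) assumption is ours, inferred from the 2097152 → 1024 result:

```shell
#!/usr/bin/env bash
# Sketch of the nr_hugepages arithmetic visible in the trace above.
# Assumes size and default_hugepages share the same unit (kB), as the
# 2097152 / 2048 = 1024 result in the log implies.
size=2097152            # requested total (from the trace)
default_hugepages=2048  # default hugepage size on x86-64: 2 MiB = 2048 kB
nr_hugepages=$(( size / default_hugepages ))

declare -A nodes_test
user_nodes=(0)          # node ids passed by the caller; node 0 in the trace
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages
done

echo "node0=${nodes_test[0]} expecting $nr_hugepages"
```

This matches the later `node0=1024 expecting 1024`-style verification lines: the test writes the per-node counts, then re-reads them from sysfs and compares.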
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.697 16:45:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.697 16:45:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:56.697 ************************************ 00:02:56.697 START TEST no_shrink_alloc 00:02:56.697 ************************************ 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.697 16:45:03 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.697 16:45:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:59.231 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:59.231 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:80:04.1 
(8086 2021): Already using the vfio-pci driver 00:02:59.231 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.231 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175561540 kB' 'MemAvailable: 178433508 kB' 'Buffers: 3896 kB' 'Cached: 10140276 kB' 'SwapCached: 0 kB' 'Active: 7174412 kB' 'Inactive: 3507524 kB' 'Active(anon): 6782404 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541016 kB' 'Mapped: 208792 kB' 'Shmem: 6244640 kB' 'KReclaimable: 233772 kB' 'Slab: 812180 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 578408 kB' 'KernelStack: 20496 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315568 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 
16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.232 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 
16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 
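Each of the word-by-word `[[ Key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]` / `continue` pairs above is one iteration of the `get_meminfo` scan in `setup/common.sh`: read the meminfo dump with `IFS=': '`, skip non-matching keys, and echo the value column on a hit (here `AnonHugePages` → `0`, so `anon=0`). The backslash-per-character pattern is just bash escaping every character so the `[[ == ]]` comparison is literal rather than a glob. A condensed sketch of that loop — the optional file argument is our addition for testability, not part of SPDK's helper:

```shell
#!/usr/bin/env bash
# Condensed form of the get_meminfo loop driving the trace above.
# get_meminfo KEY [FILE]: print KEY's value column. FILE defaults to
# /proc/meminfo; the extra argument is an illustration aid.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Literal match; the trace's \H\u\g\e... escaping does the same job.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

# Usage against a trimmed sample of the meminfo dump from the trace:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 191381152 kB' 'AnonHugePages: 0 kB' \
              'HugePages_Total: 1024' > "$sample"
get_meminfo AnonHugePages "$sample"   # prints 0, as in the anon=0 line above
rm -f "$sample"
```

Splitting on `IFS=': '` means the trailing ` kB` lands in the throwaway `_` field, which is why the trace compares bare numbers such as `1024` without units.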
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175562624 kB' 'MemAvailable: 178434592 kB' 'Buffers: 3896 kB' 'Cached: 10140280 kB' 'SwapCached: 0 kB' 'Active: 7174220 kB' 'Inactive: 3507524 kB' 'Active(anon): 6782212 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540808 kB' 'Mapped: 208624 kB' 'Shmem: 6244644 kB' 'KReclaimable: 233772 kB' 'Slab: 812240 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 578468 kB' 'KernelStack: 20528 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315536 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.233 
16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.233 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 
16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.234 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:59.235 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- 
# mem_f=/proc/meminfo 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175562680 kB' 'MemAvailable: 178434648 kB' 'Buffers: 3896 kB' 'Cached: 10140296 kB' 'SwapCached: 0 kB' 'Active: 7174244 kB' 'Inactive: 3507524 kB' 'Active(anon): 6782236 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540812 kB' 'Mapped: 208624 kB' 'Shmem: 6244660 kB' 'KReclaimable: 233772 kB' 'Slab: 812240 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 578468 kB' 'KernelStack: 20528 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315520 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 
kB' 'DirectMap1G: 183500800 kB' 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.497 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 
16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.498 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:59.499 nr_hugepages=1024 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:59.499 resv_hugepages=0 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
00:02:59.499 surplus_hugepages=0 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:59.499 anon_hugepages=0 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175563852 kB' 'MemAvailable: 178435820 kB' 'Buffers: 3896 kB' 'Cached: 10140316 kB' 'SwapCached: 0 kB' 'Active: 7174276 kB' 'Inactive: 3507524 kB' 'Active(anon): 6782268 kB' 
'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540816 kB' 'Mapped: 208624 kB' 'Shmem: 6244680 kB' 'KReclaimable: 233772 kB' 'Slab: 812240 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 578468 kB' 'KernelStack: 20528 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315520 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.499 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 
16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.500 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 
16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 
00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86113068 kB' 'MemUsed: 11549616 kB' 'SwapCached: 0 kB' 'Active: 4994544 kB' 'Inactive: 3335416 kB' 'Active(anon): 4837004 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8150652 kB' 'Mapped: 72356 kB' 'AnonPages: 182488 kB' 'Shmem: 4657696 kB' 'KernelStack: 10840 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125676 kB' 'Slab: 393532 kB' 'SReclaimable: 125676 kB' 'SUnreclaim: 267856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.501 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:59.502 node0=1024 expecting 1024 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.502 16:45:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:02.034 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:02.034 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:02.034 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:02.034 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@93 -- # local resv 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175570728 kB' 'MemAvailable: 178442696 kB' 'Buffers: 3896 kB' 'Cached: 10140400 kB' 'SwapCached: 0 kB' 'Active: 7176004 kB' 'Inactive: 3507524 kB' 'Active(anon): 6783996 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542524 kB' 'Mapped: 208376 
kB' 'Shmem: 6244764 kB' 'KReclaimable: 233772 kB' 'Slab: 811684 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 577912 kB' 'KernelStack: 20544 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315568 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.300 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.301 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.302 
16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175570880 kB' 'MemAvailable: 178442848 kB' 'Buffers: 3896 kB' 'Cached: 10140404 kB' 'SwapCached: 0 kB' 'Active: 7174700 kB' 'Inactive: 3507524 kB' 'Active(anon): 6782692 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541180 kB' 'Mapped: 
208572 kB' 'Shmem: 6244768 kB' 'KReclaimable: 233772 kB' 'Slab: 811632 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 577860 kB' 'KernelStack: 20480 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315568 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.302 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.302 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 
16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 
16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:02.304 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175570732 kB' 'MemAvailable: 178442700 kB' 'Buffers: 3896 kB' 'Cached: 10140440 kB' 'SwapCached: 0 kB' 'Active: 7175580 kB' 'Inactive: 3507524 kB' 'Active(anon): 6783572 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541992 kB' 'Mapped: 208572 kB' 'Shmem: 6244804 kB' 'KReclaimable: 233772 kB' 'Slab: 811632 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 577860 kB' 'KernelStack: 20544 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315584 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB'
[... per-key scan elided (00:03:02.304-00:03:02.306): setup/common.sh@32 hits "continue" for every /proc/meminfo key from MemTotal through HugePages_Free before HugePages_Rsvd matches ...]
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:02.306 nr_hugepages=1024
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:02.306 resv_hugepages=0
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:02.306 surplus_hugepages=0
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:02.306 anon_hugepages=0
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:02.306 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175570732 kB' 'MemAvailable: 178442700 kB' 'Buffers: 3896 kB' 'Cached: 10140456 kB' 'SwapCached: 0 kB' 'Active: 7175840 kB' 'Inactive: 3507524 kB' 'Active(anon): 6783832 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542240 kB' 'Mapped: 208572 kB' 'Shmem: 6244820 kB' 'KReclaimable: 233772 kB' 'Slab: 811632 kB' 'SReclaimable: 233772 kB' 'SUnreclaim: 577860 kB' 'KernelStack: 20528 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315584 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2964436 kB' 'DirectMap2M: 15589376 kB' 'DirectMap1G: 183500800 kB'
[... per-key scan elided (00:03:02.306-00:03:02.307): setup/common.sh@32 hits "continue" for each key from MemTotal onward while looking for HugePages_Total; this log section is truncated mid-scan at KernelStack ...]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc 
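The long trace above is `setup/common.sh`'s `get_meminfo` walking every `key: value` line of meminfo until it hits the requested field (here `HugePages_Total`, yielding 1024). A minimal standalone sketch of that pattern — the canned meminfo text below is a stand-in, not taken from this run:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: split each line on
# ": " via IFS, skip non-matching keys (the repeated "continue" branch
# in the trace), and echo the value for the requested key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Stand-in for /proc/meminfo content:
meminfo='MemTotal: 97662684 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

total=$(get_meminfo HugePages_Total <<<"$meminfo")
echo "$total"
```

The `IFS=': '` assignment makes `read` treat both the colon and spaces as field separators, so `HugePages_Total: 1024` splits cleanly into `var`, `val`, and a discarded remainder.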
-- setup/hugepages.sh@32 -- # no_nodes=2 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.307 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86121972 kB' 'MemUsed: 11540712 kB' 'SwapCached: 0 kB' 'Active: 5000492 kB' 'Inactive: 3335416 kB' 'Active(anon): 4842952 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 
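For the per-node check, `get_meminfo` switches `mem_f` to `/sys/devices/system/node/node0/meminfo`, whose lines carry a `Node 0 ` prefix that the trace strips with the extglob expansion `"${mem[@]#Node +([0-9]) }"` before running the same key/value parse. A hedged sketch with canned node data:

```shell
#!/usr/bin/env bash
shopt -s extglob  # required for the +([0-9]) pattern used in the trace

# Canned stand-in for /sys/devices/system/node/node0/meminfo; per-node
# lines are prefixed with "Node <N> ", unlike /proc/meminfo.
mapfile -t mem <<'EOF'
Node 0 HugePages_Total: 1024
Node 0 HugePages_Free: 1024
Node 0 HugePages_Surp: 0
EOF

# Strip the "Node <N> " prefix from every element, as in setup/common.sh.
mem=("${mem[@]#Node +([0-9]) }")

surp=
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<<"$line"
    [[ $var == HugePages_Surp ]] && surp=$val
done
echo "$surp"
```

After the prefix strip, the node file parses with exactly the same loop as `/proc/meminfo`, which is why the trace for `HugePages_Surp` looks identical to the earlier `HugePages_Total` pass.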
'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8150704 kB' 'Mapped: 72356 kB' 'AnonPages: 188388 kB' 'Shmem: 4657748 kB' 'KernelStack: 10872 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125676 kB' 'Slab: 393252 kB' 'SReclaimable: 125676 kB' 'SUnreclaim: 267576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 
16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 
16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.308 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:02.309 node0=1024 expecting 1024 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:02.309 00:03:02.309 real 0m5.680s 00:03:02.309 user 0m2.290s 00:03:02.309 sys 0m3.499s 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:02.309 16:45:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:02.309 ************************************ 00:03:02.309 END TEST no_shrink_alloc 00:03:02.309 ************************************ 00:03:02.309 16:45:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:02.309 16:45:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:02.309 00:03:02.309 real 0m21.439s 00:03:02.309 user 0m8.313s 00:03:02.309 sys 0m12.670s 00:03:02.309 16:45:08 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:02.309 16:45:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:02.309 ************************************ 00:03:02.309 END TEST hugepages 00:03:02.309 ************************************ 00:03:02.599 16:45:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:02.599 16:45:08 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:02.599 16:45:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:02.599 16:45:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.599 16:45:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:02.599 ************************************ 00:03:02.599 START TEST driver 00:03:02.599 ************************************ 00:03:02.599 16:45:09 setup.sh.driver -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:02.599 * Looking for test storage... 00:03:02.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:02.599 16:45:09 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:02.599 16:45:09 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.599 16:45:09 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.903 16:45:12 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:05.903 16:45:12 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.903 16:45:12 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.903 16:45:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:05.903 ************************************ 00:03:05.903 START TEST guess_driver 00:03:05.903 ************************************ 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:05.903 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:06.162 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:06.162 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:06.162 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:06.162 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:06.162 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:06.162 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:06.162 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:06.162 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:06.162 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:06.162 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:06.162 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:06.162 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:06.162 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:06.162 Looking for driver=vfio-pci 00:03:06.162 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:03:06.162 16:45:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:06.162 16:45:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.162 16:45:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.696 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.954 16:45:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.889 16:45:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.889 16:45:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.889 16:45:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.889 16:45:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:09.889 16:45:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:09.889 16:45:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 
00:03:09.890 16:45:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.088 00:03:14.088 real 0m7.702s 00:03:14.088 user 0m2.259s 00:03:14.088 sys 0m3.959s 00:03:14.088 16:45:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:14.088 16:45:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:14.088 ************************************ 00:03:14.088 END TEST guess_driver 00:03:14.088 ************************************ 00:03:14.088 16:45:20 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:14.088 00:03:14.088 real 0m11.291s 00:03:14.088 user 0m3.166s 00:03:14.088 sys 0m5.785s 00:03:14.088 16:45:20 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:14.088 16:45:20 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:14.088 ************************************ 00:03:14.088 END TEST driver 00:03:14.088 ************************************ 00:03:14.088 16:45:20 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:14.088 16:45:20 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:14.088 16:45:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:14.088 16:45:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.088 16:45:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:14.088 ************************************ 00:03:14.088 START TEST devices 00:03:14.088 ************************************ 00:03:14.088 16:45:20 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:14.088 * Looking for test storage... 
00:03:14.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:14.088 16:45:20 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:14.088 16:45:20 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:14.088 16:45:20 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.088 16:45:20 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:17.380 16:45:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:17.380 16:45:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:17.380 16:45:23 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:17.380 16:45:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:17.380 16:45:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:17.380 16:45:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:17.380 16:45:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:17.380 16:45:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:17.380 16:45:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:17.380 16:45:23 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:17.380 No valid GPT data, bailing 00:03:17.380 16:45:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:17.380 16:45:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:17.380 16:45:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:17.380 16:45:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:17.380 16:45:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:17.380 16:45:23 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:17.380 16:45:23 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:17.380 16:45:23 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.380 16:45:23 setup.sh.devices -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.380 16:45:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:17.380 ************************************ 00:03:17.380 START TEST nvme_mount 00:03:17.380 ************************************ 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:17.380 16:45:23 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:17.380 16:45:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:17.949 Creating new GPT entries in memory. 00:03:17.949 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:17.949 other utilities. 00:03:17.949 16:45:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:17.949 16:45:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:17.949 16:45:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:17.949 16:45:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:17.949 16:45:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:19.329 Creating new GPT entries in memory. 00:03:19.329 The operation has completed successfully. 
00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 4074939 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.329 16:45:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:21.868 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.868 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:21.868 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:21.868 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.868 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.868 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.868 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.868 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:21.869 
16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:21.869 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:21.869 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:22.128 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:22.128 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:22.128 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:22.128 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:22.128 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:22.128 16:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:22.128 16:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:22.128 16:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:22.128 16:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:22.128 16:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.388 16:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.921 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' ''
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:24.922 16:45:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:27.503 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:27.503 
00:03:27.503 real 0m10.328s
00:03:27.503 user 0m2.972s
00:03:27.503 sys 0m5.124s
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:27.503 16:45:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:03:27.503 ************************************
00:03:27.503 END TEST nvme_mount
00:03:27.503 ************************************
00:03:27.503 16:45:33 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0
00:03:27.503 16:45:33 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:03:27.503 16:45:33 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:27.503 16:45:33 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:27.503 16:45:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:27.503 ************************************
00:03:27.503 START TEST dm_mount
00:03:27.503 ************************************
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:27.503 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:27.504 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:27.504 16:45:33 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:28.443 Creating new GPT entries in memory.
00:03:28.443 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:28.443 other utilities.
00:03:28.443 16:45:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:28.443 16:45:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:28.443 16:45:34 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:28.443 16:45:34 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:28.443 16:45:34 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:29.381 Creating new GPT entries in memory.
00:03:29.381 The operation has completed successfully.
00:03:29.381 16:45:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:29.381 16:45:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:29.381 16:45:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:29.381 16:45:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:29.381 16:45:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:30.761 The operation has completed successfully.
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4078909
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]]
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]]
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:30.761 16:45:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' ''
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:33.301 16:45:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.837 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:03:35.838 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:03:35.838 
00:03:35.838 real 0m8.493s
00:03:35.838 user 0m2.018s
00:03:35.838 sys 0m3.405s
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:35.838 16:45:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:03:35.838 ************************************
00:03:35.838 END TEST dm_mount
00:03:35.838 ************************************
00:03:35.838 16:45:42 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0
00:03:35.838 16:45:42 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:03:35.838 16:45:42 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:03:35.838 16:45:42 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:35.838 16:45:42 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:35.838 16:45:42 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:36.097 16:45:42 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:36.097 16:45:42 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:36.356 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:36.356 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:36.356 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:36.356 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:36.356 16:45:42 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:03:36.356 16:45:42 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:36.356 16:45:42 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:36.356 16:45:42 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:36.356 16:45:42 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:36.356 16:45:42 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:36.356 16:45:42 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:36.356 
00:03:36.356 real 0m22.419s
00:03:36.356 user 0m6.261s
00:03:36.356 sys 0m10.729s
00:03:36.356 16:45:42 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:36.356 16:45:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:36.356 ************************************
00:03:36.356 END TEST devices
00:03:36.356 ************************************
00:03:36.356 16:45:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:36.356 
00:03:36.356 real 1m14.444s
00:03:36.356 user 0m24.042s
00:03:36.356 sys 0m40.543s
00:03:36.356 16:45:42 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:36.356 16:45:42 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:36.356 ************************************
00:03:36.356 END TEST setup.sh
00:03:36.356 ************************************
00:03:36.356 16:45:42 -- common/autotest_common.sh@1142 -- # return 0
00:03:36.356 16:45:42 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:38.891 Hugepages
00:03:38.891 node hugesize free / total
00:03:38.891 node0 1048576kB 0 / 0
00:03:38.891 node0 2048kB 2048 / 2048
00:03:38.891 node1 1048576kB 0 / 0
00:03:38.891 node1 2048kB 0 / 0
00:03:38.891 
00:03:38.891 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:38.891 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:38.891 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:38.891 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:38.891 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:38.891 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:38.891 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:38.891 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:38.891 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:38.891 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:38.891 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:38.891 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:38.891 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:38.891 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:38.891 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:38.891 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:38.891 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:38.891 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:38.891 16:45:45 -- spdk/autotest.sh@130 -- # uname -s
00:03:39.151 16:45:45 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:03:39.151 16:45:45 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:03:39.151 16:45:45 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:41.706 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:41.706 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:41.706 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:41.706 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:41.706 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:41.706 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:41.706 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:41.706 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.706 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.706 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.706 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.706 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.706 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.706 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.706 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:42.641 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.641 16:45:49 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:43.576 16:45:50 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:43.576 16:45:50 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:43.576 16:45:50 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:43.576 16:45:50 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:43.576 16:45:50 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:43.576 16:45:50 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:43.576 16:45:50 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:43.576 16:45:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:43.576 16:45:50 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:43.576 16:45:50 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:43.576 16:45:50 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:03:43.576 16:45:50 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.106 Waiting for block devices as requested 00:03:46.106 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:46.106 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:46.106 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:46.106 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:46.106 0000:00:04.4 (8086 
2021): vfio-pci -> ioatdma 00:03:46.106 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:46.106 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:46.365 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:46.365 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:46.365 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:46.667 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:46.667 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:46.667 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:46.667 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:46.973 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:46.973 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:46.973 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:46.973 16:45:53 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:46.973 16:45:53 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:46.973 16:45:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:46.973 16:45:53 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:03:46.973 16:45:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:46.973 16:45:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:46.973 16:45:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:46.973 16:45:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:46.973 16:45:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:46.973 16:45:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:46.973 16:45:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:46.973 16:45:53 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:46.973 16:45:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:46.973 16:45:53 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:03:46.973 16:45:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:46.973 16:45:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:46.973 16:45:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:46.973 16:45:53 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:46.973 16:45:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:46.973 16:45:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:46.973 16:45:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:46.973 16:45:53 -- common/autotest_common.sh@1557 -- # continue 00:03:46.973 16:45:53 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:46.973 16:45:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:46.973 16:45:53 -- common/autotest_common.sh@10 -- # set +x 00:03:47.232 16:45:53 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:47.232 16:45:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.232 16:45:53 -- common/autotest_common.sh@10 -- # set +x 00:03:47.232 16:45:53 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.137 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:49.137 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:49.137 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:49.137 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:49.137 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:49.137 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:49.396 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:49.396 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:49.397 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:49.397 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:49.397 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:49.397 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:49.397 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:49.397 0000:80:04.2 (8086 
2021): ioatdma -> vfio-pci 00:03:49.397 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:49.397 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:50.333 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:50.333 16:45:56 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:50.333 16:45:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:50.333 16:45:56 -- common/autotest_common.sh@10 -- # set +x 00:03:50.333 16:45:56 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:50.333 16:45:56 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:50.333 16:45:56 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:50.333 16:45:56 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:50.333 16:45:56 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:50.333 16:45:56 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:50.333 16:45:56 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:50.333 16:45:56 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:50.333 16:45:56 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:50.333 16:45:56 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:50.333 16:45:56 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:50.333 16:45:56 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:50.333 16:45:56 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:03:50.333 16:45:56 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:50.333 16:45:56 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:50.333 16:45:56 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:50.333 16:45:56 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:50.333 16:45:56 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:50.333 16:45:56 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:03:50.333 16:45:56 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:03:50.333 16:45:56 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=4087696 00:03:50.333 16:45:56 -- common/autotest_common.sh@1598 -- # waitforlisten 4087696 00:03:50.333 16:45:56 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:50.333 16:45:56 -- common/autotest_common.sh@829 -- # '[' -z 4087696 ']' 00:03:50.333 16:45:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.333 16:45:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:50.333 16:45:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.333 16:45:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:50.333 16:45:56 -- common/autotest_common.sh@10 -- # set +x 00:03:50.592 [2024-07-15 16:45:57.032020] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:03:50.592 [2024-07-15 16:45:57.032060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4087696 ] 00:03:50.592 EAL: No free 2048 kB hugepages reported on node 1 00:03:50.592 [2024-07-15 16:45:57.086948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.592 [2024-07-15 16:45:57.159782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.159 16:45:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:51.159 16:45:57 -- common/autotest_common.sh@862 -- # return 0 00:03:51.159 16:45:57 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:51.159 16:45:57 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:51.159 16:45:57 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:54.447 nvme0n1 00:03:54.447 16:46:00 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:54.447 [2024-07-15 16:46:00.976317] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:54.447 request: 00:03:54.447 { 00:03:54.447 "nvme_ctrlr_name": "nvme0", 00:03:54.447 "password": "test", 00:03:54.447 "method": "bdev_nvme_opal_revert", 00:03:54.447 "req_id": 1 00:03:54.448 } 00:03:54.448 Got JSON-RPC error response 00:03:54.448 response: 00:03:54.448 { 00:03:54.448 "code": -32602, 00:03:54.448 "message": "Invalid parameters" 00:03:54.448 } 00:03:54.448 16:46:00 -- common/autotest_common.sh@1604 -- # true 00:03:54.448 16:46:00 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:54.448 16:46:00 -- common/autotest_common.sh@1608 -- # killprocess 4087696 00:03:54.448 16:46:00 -- 
common/autotest_common.sh@948 -- # '[' -z 4087696 ']' 00:03:54.448 16:46:00 -- common/autotest_common.sh@952 -- # kill -0 4087696 00:03:54.448 16:46:00 -- common/autotest_common.sh@953 -- # uname 00:03:54.448 16:46:01 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:54.448 16:46:01 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4087696 00:03:54.448 16:46:01 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:54.448 16:46:01 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:54.448 16:46:01 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4087696' 00:03:54.448 killing process with pid 4087696 00:03:54.448 16:46:01 -- common/autotest_common.sh@967 -- # kill 4087696 00:03:54.448 16:46:01 -- common/autotest_common.sh@972 -- # wait 4087696 00:03:56.352 16:46:02 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:56.352 16:46:02 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:56.352 16:46:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:56.352 16:46:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:56.352 16:46:02 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:56.352 16:46:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:56.352 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:03:56.352 16:46:02 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:56.352 16:46:02 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.352 16:46:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.352 16:46:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.352 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:03:56.352 ************************************ 00:03:56.352 START TEST env 00:03:56.352 ************************************ 00:03:56.352 16:46:02 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.352 * Looking 
for test storage... 00:03:56.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:56.352 16:46:02 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.352 16:46:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.352 16:46:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.352 16:46:02 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.352 ************************************ 00:03:56.352 START TEST env_memory 00:03:56.352 ************************************ 00:03:56.352 16:46:02 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.352 00:03:56.352 00:03:56.352 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.352 http://cunit.sourceforge.net/ 00:03:56.352 00:03:56.352 00:03:56.352 Suite: memory 00:03:56.352 Test: alloc and free memory map ...[2024-07-15 16:46:02.827506] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:56.352 passed 00:03:56.352 Test: mem map translation ...[2024-07-15 16:46:02.846220] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:56.352 [2024-07-15 16:46:02.846238] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:56.352 [2024-07-15 16:46:02.846288] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:56.352 [2024-07-15 16:46:02.846295] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:56.352 passed 00:03:56.352 Test: mem map registration ...[2024-07-15 16:46:02.883503] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:56.352 [2024-07-15 16:46:02.883520] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:56.352 passed 00:03:56.352 Test: mem map adjacent registrations ...passed 00:03:56.352 00:03:56.352 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.352 suites 1 1 n/a 0 0 00:03:56.352 tests 4 4 4 0 0 00:03:56.352 asserts 152 152 152 0 n/a 00:03:56.352 00:03:56.352 Elapsed time = 0.137 seconds 00:03:56.352 00:03:56.352 real 0m0.150s 00:03:56.352 user 0m0.142s 00:03:56.352 sys 0m0.007s 00:03:56.352 16:46:02 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.352 16:46:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:56.352 ************************************ 00:03:56.352 END TEST env_memory 00:03:56.352 ************************************ 00:03:56.352 16:46:02 env -- common/autotest_common.sh@1142 -- # return 0 00:03:56.352 16:46:02 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.352 16:46:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.352 16:46:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.352 16:46:02 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.352 ************************************ 00:03:56.352 START TEST env_vtophys 00:03:56.352 ************************************ 00:03:56.352 16:46:02 env.env_vtophys -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.352 EAL: lib.eal log level changed from notice to debug 00:03:56.352 EAL: Detected lcore 0 as core 0 on socket 0 00:03:56.352 EAL: Detected lcore 1 as core 1 on socket 0 00:03:56.352 EAL: Detected lcore 2 as core 2 on socket 0 00:03:56.352 EAL: Detected lcore 3 as core 3 on socket 0 00:03:56.352 EAL: Detected lcore 4 as core 4 on socket 0 00:03:56.352 EAL: Detected lcore 5 as core 5 on socket 0 00:03:56.352 EAL: Detected lcore 6 as core 6 on socket 0 00:03:56.352 EAL: Detected lcore 7 as core 8 on socket 0 00:03:56.352 EAL: Detected lcore 8 as core 9 on socket 0 00:03:56.352 EAL: Detected lcore 9 as core 10 on socket 0 00:03:56.352 EAL: Detected lcore 10 as core 11 on socket 0 00:03:56.352 EAL: Detected lcore 11 as core 12 on socket 0 00:03:56.352 EAL: Detected lcore 12 as core 13 on socket 0 00:03:56.352 EAL: Detected lcore 13 as core 16 on socket 0 00:03:56.352 EAL: Detected lcore 14 as core 17 on socket 0 00:03:56.352 EAL: Detected lcore 15 as core 18 on socket 0 00:03:56.352 EAL: Detected lcore 16 as core 19 on socket 0 00:03:56.352 EAL: Detected lcore 17 as core 20 on socket 0 00:03:56.352 EAL: Detected lcore 18 as core 21 on socket 0 00:03:56.352 EAL: Detected lcore 19 as core 25 on socket 0 00:03:56.352 EAL: Detected lcore 20 as core 26 on socket 0 00:03:56.352 EAL: Detected lcore 21 as core 27 on socket 0 00:03:56.353 EAL: Detected lcore 22 as core 28 on socket 0 00:03:56.353 EAL: Detected lcore 23 as core 29 on socket 0 00:03:56.353 EAL: Detected lcore 24 as core 0 on socket 1 00:03:56.353 EAL: Detected lcore 25 as core 1 on socket 1 00:03:56.353 EAL: Detected lcore 26 as core 2 on socket 1 00:03:56.353 EAL: Detected lcore 27 as core 3 on socket 1 00:03:56.353 EAL: Detected lcore 28 as core 4 on socket 1 00:03:56.353 EAL: Detected lcore 29 as core 5 on socket 1 00:03:56.353 EAL: Detected lcore 30 as core 6 on socket 1 00:03:56.353 EAL: Detected lcore 31 as core 9 on socket 
1 00:03:56.353 EAL: Detected lcore 32 as core 10 on socket 1 00:03:56.353 EAL: Detected lcore 33 as core 11 on socket 1 00:03:56.353 EAL: Detected lcore 34 as core 12 on socket 1 00:03:56.353 EAL: Detected lcore 35 as core 13 on socket 1 00:03:56.353 EAL: Detected lcore 36 as core 16 on socket 1 00:03:56.353 EAL: Detected lcore 37 as core 17 on socket 1 00:03:56.353 EAL: Detected lcore 38 as core 18 on socket 1 00:03:56.353 EAL: Detected lcore 39 as core 19 on socket 1 00:03:56.353 EAL: Detected lcore 40 as core 20 on socket 1 00:03:56.353 EAL: Detected lcore 41 as core 21 on socket 1 00:03:56.353 EAL: Detected lcore 42 as core 24 on socket 1 00:03:56.353 EAL: Detected lcore 43 as core 25 on socket 1 00:03:56.353 EAL: Detected lcore 44 as core 26 on socket 1 00:03:56.353 EAL: Detected lcore 45 as core 27 on socket 1 00:03:56.353 EAL: Detected lcore 46 as core 28 on socket 1 00:03:56.353 EAL: Detected lcore 47 as core 29 on socket 1 00:03:56.353 EAL: Detected lcore 48 as core 0 on socket 0 00:03:56.353 EAL: Detected lcore 49 as core 1 on socket 0 00:03:56.353 EAL: Detected lcore 50 as core 2 on socket 0 00:03:56.353 EAL: Detected lcore 51 as core 3 on socket 0 00:03:56.353 EAL: Detected lcore 52 as core 4 on socket 0 00:03:56.353 EAL: Detected lcore 53 as core 5 on socket 0 00:03:56.353 EAL: Detected lcore 54 as core 6 on socket 0 00:03:56.353 EAL: Detected lcore 55 as core 8 on socket 0 00:03:56.353 EAL: Detected lcore 56 as core 9 on socket 0 00:03:56.353 EAL: Detected lcore 57 as core 10 on socket 0 00:03:56.353 EAL: Detected lcore 58 as core 11 on socket 0 00:03:56.353 EAL: Detected lcore 59 as core 12 on socket 0 00:03:56.353 EAL: Detected lcore 60 as core 13 on socket 0 00:03:56.353 EAL: Detected lcore 61 as core 16 on socket 0 00:03:56.353 EAL: Detected lcore 62 as core 17 on socket 0 00:03:56.353 EAL: Detected lcore 63 as core 18 on socket 0 00:03:56.353 EAL: Detected lcore 64 as core 19 on socket 0 00:03:56.353 EAL: Detected lcore 65 as core 20 on socket 0 
00:03:56.353 EAL: Detected lcore 66 as core 21 on socket 0 00:03:56.353 EAL: Detected lcore 67 as core 25 on socket 0 00:03:56.353 EAL: Detected lcore 68 as core 26 on socket 0 00:03:56.353 EAL: Detected lcore 69 as core 27 on socket 0 00:03:56.353 EAL: Detected lcore 70 as core 28 on socket 0 00:03:56.353 EAL: Detected lcore 71 as core 29 on socket 0 00:03:56.353 EAL: Detected lcore 72 as core 0 on socket 1 00:03:56.353 EAL: Detected lcore 73 as core 1 on socket 1 00:03:56.353 EAL: Detected lcore 74 as core 2 on socket 1 00:03:56.353 EAL: Detected lcore 75 as core 3 on socket 1 00:03:56.353 EAL: Detected lcore 76 as core 4 on socket 1 00:03:56.353 EAL: Detected lcore 77 as core 5 on socket 1 00:03:56.353 EAL: Detected lcore 78 as core 6 on socket 1 00:03:56.353 EAL: Detected lcore 79 as core 9 on socket 1 00:03:56.353 EAL: Detected lcore 80 as core 10 on socket 1 00:03:56.353 EAL: Detected lcore 81 as core 11 on socket 1 00:03:56.353 EAL: Detected lcore 82 as core 12 on socket 1 00:03:56.353 EAL: Detected lcore 83 as core 13 on socket 1 00:03:56.353 EAL: Detected lcore 84 as core 16 on socket 1 00:03:56.353 EAL: Detected lcore 85 as core 17 on socket 1 00:03:56.353 EAL: Detected lcore 86 as core 18 on socket 1 00:03:56.353 EAL: Detected lcore 87 as core 19 on socket 1 00:03:56.353 EAL: Detected lcore 88 as core 20 on socket 1 00:03:56.353 EAL: Detected lcore 89 as core 21 on socket 1 00:03:56.353 EAL: Detected lcore 90 as core 24 on socket 1 00:03:56.353 EAL: Detected lcore 91 as core 25 on socket 1 00:03:56.353 EAL: Detected lcore 92 as core 26 on socket 1 00:03:56.353 EAL: Detected lcore 93 as core 27 on socket 1 00:03:56.353 EAL: Detected lcore 94 as core 28 on socket 1 00:03:56.353 EAL: Detected lcore 95 as core 29 on socket 1 00:03:56.611 EAL: Maximum logical cores by configuration: 128 00:03:56.611 EAL: Detected CPU lcores: 96 00:03:56.611 EAL: Detected NUMA nodes: 2 00:03:56.611 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:56.611 EAL: Detected 
shared linkage of DPDK 00:03:56.611 EAL: No shared files mode enabled, IPC will be disabled 00:03:56.611 EAL: Bus pci wants IOVA as 'DC' 00:03:56.611 EAL: Buses did not request a specific IOVA mode. 00:03:56.611 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:56.611 EAL: Selected IOVA mode 'VA' 00:03:56.611 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.611 EAL: Probing VFIO support... 00:03:56.611 EAL: IOMMU type 1 (Type 1) is supported 00:03:56.611 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:56.611 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:56.611 EAL: VFIO support initialized 00:03:56.611 EAL: Ask a virtual area of 0x2e000 bytes 00:03:56.611 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:56.611 EAL: Setting up physically contiguous memory... 00:03:56.611 EAL: Setting maximum number of open files to 524288 00:03:56.611 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:56.611 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:56.611 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:56.611 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.611 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:56.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.611 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.611 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:56.611 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:56.611 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.611 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:56.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.611 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.611 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:56.611 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:56.611 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.611 EAL: 
Virtual area found at 0x200800400000 (size = 0x61000) 00:03:56.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.611 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.611 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:56.611 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:56.611 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.611 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:56.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.611 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.611 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:56.611 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:56.611 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:56.611 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.611 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:56.611 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.611 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.611 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:56.611 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:56.611 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.611 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:56.611 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.611 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.611 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:56.611 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:56.611 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.611 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:56.611 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.611 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.611 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 
00:03:56.611 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:56.611 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.611 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:56.611 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.611 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.611 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:56.611 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:56.611 EAL: Hugepages will be freed exactly as allocated. 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: TSC frequency is ~2300000 KHz 00:03:56.611 EAL: Main lcore 0 is ready (tid=7f062a05fa00;cpuset=[0]) 00:03:56.611 EAL: Trying to obtain current memory policy. 00:03:56.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.611 EAL: Restoring previous memory policy: 0 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was expanded by 2MB 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:56.611 EAL: Mem event callback 'spdk:(nil)' registered 00:03:56.611 00:03:56.611 00:03:56.611 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.611 http://cunit.sourceforge.net/ 00:03:56.611 00:03:56.611 00:03:56.611 Suite: components_suite 00:03:56.611 Test: vtophys_malloc_test ...passed 00:03:56.611 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:03:56.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.611 EAL: Restoring previous memory policy: 4 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was expanded by 4MB 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was shrunk by 4MB 00:03:56.611 EAL: Trying to obtain current memory policy. 00:03:56.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.611 EAL: Restoring previous memory policy: 4 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was expanded by 6MB 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was shrunk by 6MB 00:03:56.611 EAL: Trying to obtain current memory policy. 00:03:56.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.611 EAL: Restoring previous memory policy: 4 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was expanded by 10MB 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was shrunk by 10MB 00:03:56.611 EAL: Trying to obtain current memory policy. 
00:03:56.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.611 EAL: Restoring previous memory policy: 4 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was expanded by 18MB 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was shrunk by 18MB 00:03:56.611 EAL: Trying to obtain current memory policy. 00:03:56.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.611 EAL: Restoring previous memory policy: 4 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was expanded by 34MB 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was shrunk by 34MB 00:03:56.611 EAL: Trying to obtain current memory policy. 00:03:56.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.611 EAL: Restoring previous memory policy: 4 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was expanded by 66MB 00:03:56.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.611 EAL: request: mp_malloc_sync 00:03:56.611 EAL: No shared files mode enabled, IPC is disabled 00:03:56.611 EAL: Heap on socket 0 was shrunk by 66MB 00:03:56.611 EAL: Trying to obtain current memory policy. 
00:03:56.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.611 EAL: Restoring previous memory policy: 4 00:03:56.612 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.612 EAL: request: mp_malloc_sync 00:03:56.612 EAL: No shared files mode enabled, IPC is disabled 00:03:56.612 EAL: Heap on socket 0 was expanded by 130MB 00:03:56.612 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.612 EAL: request: mp_malloc_sync 00:03:56.612 EAL: No shared files mode enabled, IPC is disabled 00:03:56.612 EAL: Heap on socket 0 was shrunk by 130MB 00:03:56.612 EAL: Trying to obtain current memory policy. 00:03:56.612 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.612 EAL: Restoring previous memory policy: 4 00:03:56.612 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.612 EAL: request: mp_malloc_sync 00:03:56.612 EAL: No shared files mode enabled, IPC is disabled 00:03:56.612 EAL: Heap on socket 0 was expanded by 258MB 00:03:56.612 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.869 EAL: request: mp_malloc_sync 00:03:56.869 EAL: No shared files mode enabled, IPC is disabled 00:03:56.869 EAL: Heap on socket 0 was shrunk by 258MB 00:03:56.869 EAL: Trying to obtain current memory policy. 00:03:56.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.869 EAL: Restoring previous memory policy: 4 00:03:56.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.869 EAL: request: mp_malloc_sync 00:03:56.869 EAL: No shared files mode enabled, IPC is disabled 00:03:56.869 EAL: Heap on socket 0 was expanded by 514MB 00:03:56.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.127 EAL: request: mp_malloc_sync 00:03:57.127 EAL: No shared files mode enabled, IPC is disabled 00:03:57.127 EAL: Heap on socket 0 was shrunk by 514MB 00:03:57.127 EAL: Trying to obtain current memory policy. 
00:03:57.127 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.127 EAL: Restoring previous memory policy: 4 00:03:57.127 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.127 EAL: request: mp_malloc_sync 00:03:57.127 EAL: No shared files mode enabled, IPC is disabled 00:03:57.127 EAL: Heap on socket 0 was expanded by 1026MB 00:03:57.385 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.643 EAL: request: mp_malloc_sync 00:03:57.643 EAL: No shared files mode enabled, IPC is disabled 00:03:57.643 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:57.643 passed 00:03:57.643 00:03:57.643 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.643 suites 1 1 n/a 0 0 00:03:57.643 tests 2 2 2 0 0 00:03:57.643 asserts 497 497 497 0 n/a 00:03:57.643 00:03:57.643 Elapsed time = 0.966 seconds 00:03:57.643 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.643 EAL: request: mp_malloc_sync 00:03:57.643 EAL: No shared files mode enabled, IPC is disabled 00:03:57.643 EAL: Heap on socket 0 was shrunk by 2MB 00:03:57.643 EAL: No shared files mode enabled, IPC is disabled 00:03:57.643 EAL: No shared files mode enabled, IPC is disabled 00:03:57.643 EAL: No shared files mode enabled, IPC is disabled 00:03:57.643 00:03:57.643 real 0m1.083s 00:03:57.643 user 0m0.636s 00:03:57.643 sys 0m0.413s 00:03:57.643 16:46:04 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.643 16:46:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:57.643 ************************************ 00:03:57.643 END TEST env_vtophys 00:03:57.643 ************************************ 00:03:57.643 16:46:04 env -- common/autotest_common.sh@1142 -- # return 0 00:03:57.643 16:46:04 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:57.643 16:46:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.643 16:46:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:03:57.643 16:46:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.643 ************************************ 00:03:57.643 START TEST env_pci 00:03:57.643 ************************************ 00:03:57.643 16:46:04 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:57.643 00:03:57.643 00:03:57.643 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.643 http://cunit.sourceforge.net/ 00:03:57.643 00:03:57.643 00:03:57.643 Suite: pci 00:03:57.643 Test: pci_hook ...[2024-07-15 16:46:04.160471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4088999 has claimed it 00:03:57.643 EAL: Cannot find device (10000:00:01.0) 00:03:57.643 EAL: Failed to attach device on primary process 00:03:57.643 passed 00:03:57.643 00:03:57.643 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.643 suites 1 1 n/a 0 0 00:03:57.643 tests 1 1 1 0 0 00:03:57.643 asserts 25 25 25 0 n/a 00:03:57.643 00:03:57.643 Elapsed time = 0.028 seconds 00:03:57.643 00:03:57.643 real 0m0.047s 00:03:57.643 user 0m0.010s 00:03:57.643 sys 0m0.036s 00:03:57.643 16:46:04 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.643 16:46:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:57.643 ************************************ 00:03:57.643 END TEST env_pci 00:03:57.643 ************************************ 00:03:57.643 16:46:04 env -- common/autotest_common.sh@1142 -- # return 0 00:03:57.643 16:46:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:57.643 16:46:04 env -- env/env.sh@15 -- # uname 00:03:57.643 16:46:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:57.643 16:46:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:57.643 16:46:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.643 16:46:04 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:57.643 16:46:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.643 16:46:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.643 ************************************ 00:03:57.643 START TEST env_dpdk_post_init 00:03:57.643 ************************************ 00:03:57.643 16:46:04 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.643 EAL: Detected CPU lcores: 96 00:03:57.643 EAL: Detected NUMA nodes: 2 00:03:57.643 EAL: Detected shared linkage of DPDK 00:03:57.643 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:57.643 EAL: Selected IOVA mode 'VA' 00:03:57.643 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.643 EAL: VFIO support initialized 00:03:57.643 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:57.901 EAL: Using IOMMU type 1 (Type 1) 00:03:57.901 EAL: Ignore mapping IO port bar(1) 00:03:57.901 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:57.901 EAL: Ignore mapping IO port bar(1) 00:03:57.901 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:57.901 EAL: Ignore mapping IO port bar(1) 00:03:57.901 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:57.901 EAL: Ignore mapping IO port bar(1) 00:03:57.901 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:57.901 EAL: Ignore mapping IO port bar(1) 00:03:57.901 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:57.901 EAL: Ignore mapping IO port bar(1) 00:03:57.901 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:00:04.5 (socket 0) 00:03:57.901 EAL: Ignore mapping IO port bar(1) 00:03:57.901 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:57.901 EAL: Ignore mapping IO port bar(1) 00:03:57.901 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:58.835 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:58.835 EAL: Ignore mapping IO port bar(1) 00:03:58.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:58.835 EAL: Ignore mapping IO port bar(1) 00:03:58.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:58.835 EAL: Ignore mapping IO port bar(1) 00:03:58.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:58.835 EAL: Ignore mapping IO port bar(1) 00:03:58.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:58.835 EAL: Ignore mapping IO port bar(1) 00:03:58.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:58.835 EAL: Ignore mapping IO port bar(1) 00:03:58.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:58.835 EAL: Ignore mapping IO port bar(1) 00:03:58.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:58.835 EAL: Ignore mapping IO port bar(1) 00:03:58.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:02.122 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:02.122 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:02.122 Starting DPDK initialization... 00:04:02.122 Starting SPDK post initialization... 00:04:02.122 SPDK NVMe probe 00:04:02.122 Attaching to 0000:5e:00.0 00:04:02.122 Attached to 0000:5e:00.0 00:04:02.122 Cleaning up... 
00:04:02.122 00:04:02.122 real 0m4.332s 00:04:02.122 user 0m3.294s 00:04:02.122 sys 0m0.114s 00:04:02.122 16:46:08 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.122 16:46:08 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.122 ************************************ 00:04:02.122 END TEST env_dpdk_post_init 00:04:02.122 ************************************ 00:04:02.122 16:46:08 env -- common/autotest_common.sh@1142 -- # return 0 00:04:02.122 16:46:08 env -- env/env.sh@26 -- # uname 00:04:02.122 16:46:08 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:02.122 16:46:08 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.122 16:46:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.122 16:46:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.122 16:46:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.122 ************************************ 00:04:02.122 START TEST env_mem_callbacks 00:04:02.122 ************************************ 00:04:02.122 16:46:08 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.122 EAL: Detected CPU lcores: 96 00:04:02.122 EAL: Detected NUMA nodes: 2 00:04:02.122 EAL: Detected shared linkage of DPDK 00:04:02.122 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:02.122 EAL: Selected IOVA mode 'VA' 00:04:02.122 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.122 EAL: VFIO support initialized 00:04:02.122 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:02.122 00:04:02.122 00:04:02.122 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.122 http://cunit.sourceforge.net/ 00:04:02.122 00:04:02.122 00:04:02.122 Suite: memory 00:04:02.122 Test: test ... 
00:04:02.122 register 0x200000200000 2097152 00:04:02.122 malloc 3145728 00:04:02.122 register 0x200000400000 4194304 00:04:02.122 buf 0x200000500000 len 3145728 PASSED 00:04:02.122 malloc 64 00:04:02.122 buf 0x2000004fff40 len 64 PASSED 00:04:02.122 malloc 4194304 00:04:02.122 register 0x200000800000 6291456 00:04:02.122 buf 0x200000a00000 len 4194304 PASSED 00:04:02.122 free 0x200000500000 3145728 00:04:02.122 free 0x2000004fff40 64 00:04:02.122 unregister 0x200000400000 4194304 PASSED 00:04:02.122 free 0x200000a00000 4194304 00:04:02.122 unregister 0x200000800000 6291456 PASSED 00:04:02.122 malloc 8388608 00:04:02.122 register 0x200000400000 10485760 00:04:02.122 buf 0x200000600000 len 8388608 PASSED 00:04:02.122 free 0x200000600000 8388608 00:04:02.122 unregister 0x200000400000 10485760 PASSED 00:04:02.122 passed 00:04:02.122 00:04:02.122 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.122 suites 1 1 n/a 0 0 00:04:02.122 tests 1 1 1 0 0 00:04:02.122 asserts 15 15 15 0 n/a 00:04:02.122 00:04:02.122 Elapsed time = 0.005 seconds 00:04:02.122 00:04:02.122 real 0m0.054s 00:04:02.122 user 0m0.017s 00:04:02.122 sys 0m0.037s 00:04:02.122 16:46:08 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.122 16:46:08 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:02.122 ************************************ 00:04:02.122 END TEST env_mem_callbacks 00:04:02.122 ************************************ 00:04:02.122 16:46:08 env -- common/autotest_common.sh@1142 -- # return 0 00:04:02.122 00:04:02.122 real 0m6.089s 00:04:02.122 user 0m4.284s 00:04:02.122 sys 0m0.874s 00:04:02.122 16:46:08 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.122 16:46:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.122 ************************************ 00:04:02.122 END TEST env 00:04:02.122 ************************************ 00:04:02.122 16:46:08 -- common/autotest_common.sh@1142 -- # return 0 
00:04:02.122 16:46:08 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:02.122 16:46:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.122 16:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.122 16:46:08 -- common/autotest_common.sh@10 -- # set +x 00:04:02.381 ************************************ 00:04:02.381 START TEST rpc 00:04:02.381 ************************************ 00:04:02.381 16:46:08 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:02.381 * Looking for test storage... 00:04:02.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:02.381 16:46:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4089822 00:04:02.381 16:46:08 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:02.381 16:46:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.381 16:46:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4089822 00:04:02.381 16:46:08 rpc -- common/autotest_common.sh@829 -- # '[' -z 4089822 ']' 00:04:02.381 16:46:08 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.381 16:46:08 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:02.381 16:46:08 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.381 16:46:08 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:02.381 16:46:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.381 [2024-07-15 16:46:08.946100] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:04:02.381 [2024-07-15 16:46:08.946140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4089822 ] 00:04:02.381 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.381 [2024-07-15 16:46:08.999559] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.640 [2024-07-15 16:46:09.074799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:02.640 [2024-07-15 16:46:09.074841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4089822' to capture a snapshot of events at runtime. 00:04:02.640 [2024-07-15 16:46:09.074848] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:02.640 [2024-07-15 16:46:09.074854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:02.640 [2024-07-15 16:46:09.074859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4089822 for offline analysis/debug. 
00:04:02.640 [2024-07-15 16:46:09.074883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.208 16:46:09 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.208 16:46:09 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:03.208 16:46:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:03.208 16:46:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:03.208 16:46:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:03.208 16:46:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:03.208 16:46:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.208 16:46:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.208 16:46:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.208 ************************************ 00:04:03.208 START TEST rpc_integrity 00:04:03.208 ************************************ 00:04:03.208 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:03.208 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.208 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.208 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.208 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.208 16:46:09 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.208 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:03.208 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:03.208 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:03.208 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.208 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.208 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.208 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:03.208 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:03.208 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.208 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.208 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.208 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:03.208 { 00:04:03.208 "name": "Malloc0", 00:04:03.208 "aliases": [ 00:04:03.208 "d97a705e-26a1-441f-a8eb-62c4bb6c490a" 00:04:03.208 ], 00:04:03.208 "product_name": "Malloc disk", 00:04:03.208 "block_size": 512, 00:04:03.208 "num_blocks": 16384, 00:04:03.208 "uuid": "d97a705e-26a1-441f-a8eb-62c4bb6c490a", 00:04:03.208 "assigned_rate_limits": { 00:04:03.208 "rw_ios_per_sec": 0, 00:04:03.208 "rw_mbytes_per_sec": 0, 00:04:03.208 "r_mbytes_per_sec": 0, 00:04:03.208 "w_mbytes_per_sec": 0 00:04:03.208 }, 00:04:03.208 "claimed": false, 00:04:03.208 "zoned": false, 00:04:03.208 "supported_io_types": { 00:04:03.208 "read": true, 00:04:03.208 "write": true, 00:04:03.208 "unmap": true, 00:04:03.208 "flush": true, 00:04:03.208 "reset": true, 00:04:03.208 "nvme_admin": false, 00:04:03.208 "nvme_io": false, 00:04:03.208 "nvme_io_md": false, 00:04:03.208 "write_zeroes": true, 00:04:03.208 "zcopy": true, 00:04:03.208 "get_zone_info": false, 00:04:03.208 
"zone_management": false, 00:04:03.208 "zone_append": false, 00:04:03.208 "compare": false, 00:04:03.208 "compare_and_write": false, 00:04:03.208 "abort": true, 00:04:03.208 "seek_hole": false, 00:04:03.208 "seek_data": false, 00:04:03.208 "copy": true, 00:04:03.208 "nvme_iov_md": false 00:04:03.208 }, 00:04:03.208 "memory_domains": [ 00:04:03.208 { 00:04:03.208 "dma_device_id": "system", 00:04:03.208 "dma_device_type": 1 00:04:03.209 }, 00:04:03.209 { 00:04:03.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.209 "dma_device_type": 2 00:04:03.209 } 00:04:03.209 ], 00:04:03.209 "driver_specific": {} 00:04:03.209 } 00:04:03.209 ]' 00:04:03.209 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:03.467 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.467 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:03.467 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.467 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.467 [2024-07-15 16:46:09.903802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:03.467 [2024-07-15 16:46:09.903837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.467 [2024-07-15 16:46:09.903851] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24a62d0 00:04:03.467 [2024-07-15 16:46:09.903857] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.467 [2024-07-15 16:46:09.904928] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.467 [2024-07-15 16:46:09.904951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:03.467 Passthru0 00:04:03.467 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.467 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:03.467 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.467 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.467 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.467 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.467 { 00:04:03.467 "name": "Malloc0", 00:04:03.467 "aliases": [ 00:04:03.467 "d97a705e-26a1-441f-a8eb-62c4bb6c490a" 00:04:03.467 ], 00:04:03.467 "product_name": "Malloc disk", 00:04:03.467 "block_size": 512, 00:04:03.467 "num_blocks": 16384, 00:04:03.467 "uuid": "d97a705e-26a1-441f-a8eb-62c4bb6c490a", 00:04:03.467 "assigned_rate_limits": { 00:04:03.467 "rw_ios_per_sec": 0, 00:04:03.467 "rw_mbytes_per_sec": 0, 00:04:03.467 "r_mbytes_per_sec": 0, 00:04:03.467 "w_mbytes_per_sec": 0 00:04:03.467 }, 00:04:03.467 "claimed": true, 00:04:03.467 "claim_type": "exclusive_write", 00:04:03.467 "zoned": false, 00:04:03.467 "supported_io_types": { 00:04:03.467 "read": true, 00:04:03.467 "write": true, 00:04:03.467 "unmap": true, 00:04:03.467 "flush": true, 00:04:03.467 "reset": true, 00:04:03.467 "nvme_admin": false, 00:04:03.467 "nvme_io": false, 00:04:03.467 "nvme_io_md": false, 00:04:03.468 "write_zeroes": true, 00:04:03.468 "zcopy": true, 00:04:03.468 "get_zone_info": false, 00:04:03.468 "zone_management": false, 00:04:03.468 "zone_append": false, 00:04:03.468 "compare": false, 00:04:03.468 "compare_and_write": false, 00:04:03.468 "abort": true, 00:04:03.468 "seek_hole": false, 00:04:03.468 "seek_data": false, 00:04:03.468 "copy": true, 00:04:03.468 "nvme_iov_md": false 00:04:03.468 }, 00:04:03.468 "memory_domains": [ 00:04:03.468 { 00:04:03.468 "dma_device_id": "system", 00:04:03.468 "dma_device_type": 1 00:04:03.468 }, 00:04:03.468 { 00:04:03.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.468 "dma_device_type": 2 00:04:03.468 } 00:04:03.468 ], 00:04:03.468 "driver_specific": {} 00:04:03.468 }, 00:04:03.468 { 
00:04:03.468 "name": "Passthru0", 00:04:03.468 "aliases": [ 00:04:03.468 "318f95e6-64b0-5c8f-811f-caa751a6e528" 00:04:03.468 ], 00:04:03.468 "product_name": "passthru", 00:04:03.468 "block_size": 512, 00:04:03.468 "num_blocks": 16384, 00:04:03.468 "uuid": "318f95e6-64b0-5c8f-811f-caa751a6e528", 00:04:03.468 "assigned_rate_limits": { 00:04:03.468 "rw_ios_per_sec": 0, 00:04:03.468 "rw_mbytes_per_sec": 0, 00:04:03.468 "r_mbytes_per_sec": 0, 00:04:03.468 "w_mbytes_per_sec": 0 00:04:03.468 }, 00:04:03.468 "claimed": false, 00:04:03.468 "zoned": false, 00:04:03.468 "supported_io_types": { 00:04:03.468 "read": true, 00:04:03.468 "write": true, 00:04:03.468 "unmap": true, 00:04:03.468 "flush": true, 00:04:03.468 "reset": true, 00:04:03.468 "nvme_admin": false, 00:04:03.468 "nvme_io": false, 00:04:03.468 "nvme_io_md": false, 00:04:03.468 "write_zeroes": true, 00:04:03.468 "zcopy": true, 00:04:03.468 "get_zone_info": false, 00:04:03.468 "zone_management": false, 00:04:03.468 "zone_append": false, 00:04:03.468 "compare": false, 00:04:03.468 "compare_and_write": false, 00:04:03.468 "abort": true, 00:04:03.468 "seek_hole": false, 00:04:03.468 "seek_data": false, 00:04:03.468 "copy": true, 00:04:03.468 "nvme_iov_md": false 00:04:03.468 }, 00:04:03.468 "memory_domains": [ 00:04:03.468 { 00:04:03.468 "dma_device_id": "system", 00:04:03.468 "dma_device_type": 1 00:04:03.468 }, 00:04:03.468 { 00:04:03.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.468 "dma_device_type": 2 00:04:03.468 } 00:04:03.468 ], 00:04:03.468 "driver_specific": { 00:04:03.468 "passthru": { 00:04:03.468 "name": "Passthru0", 00:04:03.468 "base_bdev_name": "Malloc0" 00:04:03.468 } 00:04:03.468 } 00:04:03.468 } 00:04:03.468 ]' 00:04:03.468 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.468 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.468 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.468 16:46:09 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.468 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.468 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.468 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:03.468 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.468 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.468 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.468 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.468 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.468 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.468 16:46:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.468 16:46:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.468 16:46:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.468 16:46:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.468 00:04:03.468 real 0m0.273s 00:04:03.468 user 0m0.174s 00:04:03.468 sys 0m0.036s 00:04:03.468 16:46:10 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.468 16:46:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.468 ************************************ 00:04:03.468 END TEST rpc_integrity 00:04:03.468 ************************************ 00:04:03.468 16:46:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.468 16:46:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:03.468 16:46:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.468 16:46:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.468 16:46:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.468 
************************************ 00:04:03.468 START TEST rpc_plugins 00:04:03.468 ************************************ 00:04:03.468 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:03.468 16:46:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:03.468 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.468 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.468 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.468 16:46:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:03.468 16:46:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:03.468 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.468 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.727 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.727 16:46:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:03.727 { 00:04:03.727 "name": "Malloc1", 00:04:03.727 "aliases": [ 00:04:03.727 "e6115de2-c101-4369-8513-b26c3b0401c2" 00:04:03.727 ], 00:04:03.727 "product_name": "Malloc disk", 00:04:03.727 "block_size": 4096, 00:04:03.727 "num_blocks": 256, 00:04:03.727 "uuid": "e6115de2-c101-4369-8513-b26c3b0401c2", 00:04:03.727 "assigned_rate_limits": { 00:04:03.727 "rw_ios_per_sec": 0, 00:04:03.727 "rw_mbytes_per_sec": 0, 00:04:03.727 "r_mbytes_per_sec": 0, 00:04:03.727 "w_mbytes_per_sec": 0 00:04:03.727 }, 00:04:03.727 "claimed": false, 00:04:03.727 "zoned": false, 00:04:03.727 "supported_io_types": { 00:04:03.727 "read": true, 00:04:03.727 "write": true, 00:04:03.727 "unmap": true, 00:04:03.727 "flush": true, 00:04:03.727 "reset": true, 00:04:03.727 "nvme_admin": false, 00:04:03.727 "nvme_io": false, 00:04:03.727 "nvme_io_md": false, 00:04:03.727 "write_zeroes": true, 00:04:03.727 "zcopy": true, 00:04:03.727 
"get_zone_info": false, 00:04:03.727 "zone_management": false, 00:04:03.727 "zone_append": false, 00:04:03.727 "compare": false, 00:04:03.727 "compare_and_write": false, 00:04:03.727 "abort": true, 00:04:03.727 "seek_hole": false, 00:04:03.727 "seek_data": false, 00:04:03.727 "copy": true, 00:04:03.727 "nvme_iov_md": false 00:04:03.727 }, 00:04:03.727 "memory_domains": [ 00:04:03.727 { 00:04:03.727 "dma_device_id": "system", 00:04:03.727 "dma_device_type": 1 00:04:03.727 }, 00:04:03.727 { 00:04:03.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.727 "dma_device_type": 2 00:04:03.727 } 00:04:03.727 ], 00:04:03.727 "driver_specific": {} 00:04:03.727 } 00:04:03.727 ]' 00:04:03.727 16:46:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:03.727 16:46:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:03.727 16:46:10 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:03.727 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.727 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.727 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.727 16:46:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:03.727 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.727 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.727 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.727 16:46:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:03.727 16:46:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:03.727 16:46:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:03.727 00:04:03.727 real 0m0.136s 00:04:03.727 user 0m0.088s 00:04:03.727 sys 0m0.016s 00:04:03.727 16:46:10 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.727 16:46:10 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:03.727 ************************************ 00:04:03.727 END TEST rpc_plugins 00:04:03.727 ************************************ 00:04:03.727 16:46:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.727 16:46:10 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:03.727 16:46:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.727 16:46:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.727 16:46:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.727 ************************************ 00:04:03.727 START TEST rpc_trace_cmd_test 00:04:03.727 ************************************ 00:04:03.727 16:46:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:03.727 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:03.727 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:03.727 16:46:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.727 16:46:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.727 16:46:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.727 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:03.727 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4089822", 00:04:03.727 "tpoint_group_mask": "0x8", 00:04:03.727 "iscsi_conn": { 00:04:03.727 "mask": "0x2", 00:04:03.727 "tpoint_mask": "0x0" 00:04:03.727 }, 00:04:03.727 "scsi": { 00:04:03.727 "mask": "0x4", 00:04:03.727 "tpoint_mask": "0x0" 00:04:03.727 }, 00:04:03.727 "bdev": { 00:04:03.727 "mask": "0x8", 00:04:03.727 "tpoint_mask": "0xffffffffffffffff" 00:04:03.727 }, 00:04:03.727 "nvmf_rdma": { 00:04:03.727 "mask": "0x10", 00:04:03.727 "tpoint_mask": "0x0" 00:04:03.727 }, 00:04:03.727 "nvmf_tcp": { 00:04:03.727 "mask": "0x20", 00:04:03.727 "tpoint_mask": "0x0" 00:04:03.727 }, 
00:04:03.727 "ftl": { 00:04:03.727 "mask": "0x40", 00:04:03.727 "tpoint_mask": "0x0" 00:04:03.727 }, 00:04:03.727 "blobfs": { 00:04:03.727 "mask": "0x80", 00:04:03.727 "tpoint_mask": "0x0" 00:04:03.727 }, 00:04:03.727 "dsa": { 00:04:03.727 "mask": "0x200", 00:04:03.727 "tpoint_mask": "0x0" 00:04:03.727 }, 00:04:03.728 "thread": { 00:04:03.728 "mask": "0x400", 00:04:03.728 "tpoint_mask": "0x0" 00:04:03.728 }, 00:04:03.728 "nvme_pcie": { 00:04:03.728 "mask": "0x800", 00:04:03.728 "tpoint_mask": "0x0" 00:04:03.728 }, 00:04:03.728 "iaa": { 00:04:03.728 "mask": "0x1000", 00:04:03.728 "tpoint_mask": "0x0" 00:04:03.728 }, 00:04:03.728 "nvme_tcp": { 00:04:03.728 "mask": "0x2000", 00:04:03.728 "tpoint_mask": "0x0" 00:04:03.728 }, 00:04:03.728 "bdev_nvme": { 00:04:03.728 "mask": "0x4000", 00:04:03.728 "tpoint_mask": "0x0" 00:04:03.728 }, 00:04:03.728 "sock": { 00:04:03.728 "mask": "0x8000", 00:04:03.728 "tpoint_mask": "0x0" 00:04:03.728 } 00:04:03.728 }' 00:04:03.728 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:03.728 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:03.728 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:03.987 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:03.987 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:03.987 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:03.987 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:03.987 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:03.987 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:03.987 16:46:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:03.987 00:04:03.987 real 0m0.190s 00:04:03.987 user 0m0.155s 00:04:03.987 sys 0m0.027s 00:04:03.987 16:46:10 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.987 16:46:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.987 ************************************ 00:04:03.987 END TEST rpc_trace_cmd_test 00:04:03.987 ************************************ 00:04:03.987 16:46:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.987 16:46:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:03.987 16:46:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:03.987 16:46:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:03.987 16:46:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.987 16:46:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.987 16:46:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.987 ************************************ 00:04:03.987 START TEST rpc_daemon_integrity 00:04:03.987 ************************************ 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.987 16:46:10 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:03.987 { 00:04:03.987 "name": "Malloc2", 00:04:03.987 "aliases": [ 00:04:03.987 "38e8743d-7035-4bdd-8c1b-061a6be2a6b9" 00:04:03.987 ], 00:04:03.987 "product_name": "Malloc disk", 00:04:03.987 "block_size": 512, 00:04:03.987 "num_blocks": 16384, 00:04:03.987 "uuid": "38e8743d-7035-4bdd-8c1b-061a6be2a6b9", 00:04:03.987 "assigned_rate_limits": { 00:04:03.987 "rw_ios_per_sec": 0, 00:04:03.987 "rw_mbytes_per_sec": 0, 00:04:03.987 "r_mbytes_per_sec": 0, 00:04:03.987 "w_mbytes_per_sec": 0 00:04:03.987 }, 00:04:03.987 "claimed": false, 00:04:03.987 "zoned": false, 00:04:03.987 "supported_io_types": { 00:04:03.987 "read": true, 00:04:03.987 "write": true, 00:04:03.987 "unmap": true, 00:04:03.987 "flush": true, 00:04:03.987 "reset": true, 00:04:03.987 "nvme_admin": false, 00:04:03.987 "nvme_io": false, 00:04:03.987 "nvme_io_md": false, 00:04:03.987 "write_zeroes": true, 00:04:03.987 "zcopy": true, 00:04:03.987 "get_zone_info": false, 00:04:03.987 "zone_management": false, 00:04:03.987 "zone_append": false, 00:04:03.987 "compare": false, 00:04:03.987 "compare_and_write": false, 00:04:03.987 "abort": true, 00:04:03.987 "seek_hole": false, 00:04:03.987 "seek_data": false, 00:04:03.987 "copy": true, 00:04:03.987 "nvme_iov_md": false 00:04:03.987 }, 00:04:03.987 "memory_domains": [ 00:04:03.987 { 00:04:03.987 "dma_device_id": "system", 00:04:03.987 "dma_device_type": 
1 00:04:03.987 }, 00:04:03.987 { 00:04:03.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.987 "dma_device_type": 2 00:04:03.987 } 00:04:03.987 ], 00:04:03.987 "driver_specific": {} 00:04:03.987 } 00:04:03.987 ]' 00:04:03.987 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:04.246 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:04.246 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:04.246 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.246 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.246 [2024-07-15 16:46:10.697979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:04.246 [2024-07-15 16:46:10.698014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:04.247 [2024-07-15 16:46:10.698029] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x263dac0 00:04:04.247 [2024-07-15 16:46:10.698035] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:04.247 [2024-07-15 16:46:10.698997] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:04.247 [2024-07-15 16:46:10.699020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:04.247 Passthru0 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:04:04.247 { 00:04:04.247 "name": "Malloc2", 00:04:04.247 "aliases": [ 00:04:04.247 "38e8743d-7035-4bdd-8c1b-061a6be2a6b9" 00:04:04.247 ], 00:04:04.247 "product_name": "Malloc disk", 00:04:04.247 "block_size": 512, 00:04:04.247 "num_blocks": 16384, 00:04:04.247 "uuid": "38e8743d-7035-4bdd-8c1b-061a6be2a6b9", 00:04:04.247 "assigned_rate_limits": { 00:04:04.247 "rw_ios_per_sec": 0, 00:04:04.247 "rw_mbytes_per_sec": 0, 00:04:04.247 "r_mbytes_per_sec": 0, 00:04:04.247 "w_mbytes_per_sec": 0 00:04:04.247 }, 00:04:04.247 "claimed": true, 00:04:04.247 "claim_type": "exclusive_write", 00:04:04.247 "zoned": false, 00:04:04.247 "supported_io_types": { 00:04:04.247 "read": true, 00:04:04.247 "write": true, 00:04:04.247 "unmap": true, 00:04:04.247 "flush": true, 00:04:04.247 "reset": true, 00:04:04.247 "nvme_admin": false, 00:04:04.247 "nvme_io": false, 00:04:04.247 "nvme_io_md": false, 00:04:04.247 "write_zeroes": true, 00:04:04.247 "zcopy": true, 00:04:04.247 "get_zone_info": false, 00:04:04.247 "zone_management": false, 00:04:04.247 "zone_append": false, 00:04:04.247 "compare": false, 00:04:04.247 "compare_and_write": false, 00:04:04.247 "abort": true, 00:04:04.247 "seek_hole": false, 00:04:04.247 "seek_data": false, 00:04:04.247 "copy": true, 00:04:04.247 "nvme_iov_md": false 00:04:04.247 }, 00:04:04.247 "memory_domains": [ 00:04:04.247 { 00:04:04.247 "dma_device_id": "system", 00:04:04.247 "dma_device_type": 1 00:04:04.247 }, 00:04:04.247 { 00:04:04.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.247 "dma_device_type": 2 00:04:04.247 } 00:04:04.247 ], 00:04:04.247 "driver_specific": {} 00:04:04.247 }, 00:04:04.247 { 00:04:04.247 "name": "Passthru0", 00:04:04.247 "aliases": [ 00:04:04.247 "d4207ba1-837d-560d-9cd2-023446ae8276" 00:04:04.247 ], 00:04:04.247 "product_name": "passthru", 00:04:04.247 "block_size": 512, 00:04:04.247 "num_blocks": 16384, 00:04:04.247 "uuid": "d4207ba1-837d-560d-9cd2-023446ae8276", 00:04:04.247 "assigned_rate_limits": { 00:04:04.247 
"rw_ios_per_sec": 0, 00:04:04.247 "rw_mbytes_per_sec": 0, 00:04:04.247 "r_mbytes_per_sec": 0, 00:04:04.247 "w_mbytes_per_sec": 0 00:04:04.247 }, 00:04:04.247 "claimed": false, 00:04:04.247 "zoned": false, 00:04:04.247 "supported_io_types": { 00:04:04.247 "read": true, 00:04:04.247 "write": true, 00:04:04.247 "unmap": true, 00:04:04.247 "flush": true, 00:04:04.247 "reset": true, 00:04:04.247 "nvme_admin": false, 00:04:04.247 "nvme_io": false, 00:04:04.247 "nvme_io_md": false, 00:04:04.247 "write_zeroes": true, 00:04:04.247 "zcopy": true, 00:04:04.247 "get_zone_info": false, 00:04:04.247 "zone_management": false, 00:04:04.247 "zone_append": false, 00:04:04.247 "compare": false, 00:04:04.247 "compare_and_write": false, 00:04:04.247 "abort": true, 00:04:04.247 "seek_hole": false, 00:04:04.247 "seek_data": false, 00:04:04.247 "copy": true, 00:04:04.247 "nvme_iov_md": false 00:04:04.247 }, 00:04:04.247 "memory_domains": [ 00:04:04.247 { 00:04:04.247 "dma_device_id": "system", 00:04:04.247 "dma_device_type": 1 00:04:04.247 }, 00:04:04.247 { 00:04:04.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.247 "dma_device_type": 2 00:04:04.247 } 00:04:04.247 ], 00:04:04.247 "driver_specific": { 00:04:04.247 "passthru": { 00:04:04.247 "name": "Passthru0", 00:04:04.247 "base_bdev_name": "Malloc2" 00:04:04.247 } 00:04:04.247 } 00:04:04.247 } 00:04:04.247 ]' 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:04.247 00:04:04.247 real 0m0.268s 00:04:04.247 user 0m0.163s 00:04:04.247 sys 0m0.042s 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.247 16:46:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.247 ************************************ 00:04:04.247 END TEST rpc_daemon_integrity 00:04:04.247 ************************************ 00:04:04.247 16:46:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:04.247 16:46:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:04.247 16:46:10 rpc -- rpc/rpc.sh@84 -- # killprocess 4089822 00:04:04.247 16:46:10 rpc -- common/autotest_common.sh@948 -- # '[' -z 4089822 ']' 00:04:04.247 16:46:10 rpc -- common/autotest_common.sh@952 -- # kill -0 4089822 00:04:04.247 16:46:10 rpc -- common/autotest_common.sh@953 -- # uname 00:04:04.247 16:46:10 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:04.247 16:46:10 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 4089822 00:04:04.247 16:46:10 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:04.248 16:46:10 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:04.248 16:46:10 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4089822' 00:04:04.248 killing process with pid 4089822 00:04:04.248 16:46:10 rpc -- common/autotest_common.sh@967 -- # kill 4089822 00:04:04.248 16:46:10 rpc -- common/autotest_common.sh@972 -- # wait 4089822 00:04:04.816 00:04:04.816 real 0m2.404s 00:04:04.816 user 0m3.106s 00:04:04.816 sys 0m0.635s 00:04:04.816 16:46:11 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.816 16:46:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.816 ************************************ 00:04:04.816 END TEST rpc 00:04:04.816 ************************************ 00:04:04.816 16:46:11 -- common/autotest_common.sh@1142 -- # return 0 00:04:04.816 16:46:11 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:04.816 16:46:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.816 16:46:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.816 16:46:11 -- common/autotest_common.sh@10 -- # set +x 00:04:04.816 ************************************ 00:04:04.816 START TEST skip_rpc 00:04:04.816 ************************************ 00:04:04.816 16:46:11 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:04.816 * Looking for test storage... 
00:04:04.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:04.816 16:46:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.816 16:46:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.816 16:46:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:04.816 16:46:11 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.816 16:46:11 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.816 16:46:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.816 ************************************ 00:04:04.816 START TEST skip_rpc 00:04:04.816 ************************************ 00:04:04.816 16:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:04.816 16:46:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:04.816 16:46:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4090451 00:04:04.816 16:46:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.816 16:46:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:04.816 [2024-07-15 16:46:11.448094] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:04:04.816 [2024-07-15 16:46:11.448135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090451 ] 00:04:04.816 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.075 [2024-07-15 16:46:11.495866] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.075 [2024-07-15 16:46:11.567647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.386 16:46:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:10.386 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:10.386 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:10.386 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:10.386 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:10.386 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4090451 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 4090451 ']' 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 4090451 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4090451 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4090451' 00:04:10.387 killing process with pid 4090451 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 4090451 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 4090451 00:04:10.387 00:04:10.387 real 0m5.363s 00:04:10.387 user 0m5.161s 00:04:10.387 sys 0m0.233s 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.387 16:46:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.387 ************************************ 00:04:10.387 END TEST skip_rpc 00:04:10.387 ************************************ 00:04:10.387 16:46:16 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:10.387 16:46:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:10.387 16:46:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.387 16:46:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.387 
16:46:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.387 ************************************ 00:04:10.387 START TEST skip_rpc_with_json 00:04:10.387 ************************************ 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4091402 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4091402 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 4091402 ']' 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:10.387 16:46:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.387 [2024-07-15 16:46:16.887979] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:04:10.387 [2024-07-15 16:46:16.888018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091402 ] 00:04:10.387 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.387 [2024-07-15 16:46:16.939952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.387 [2024-07-15 16:46:17.019531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.324 [2024-07-15 16:46:17.692576] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:11.324 request: 00:04:11.324 { 00:04:11.324 "trtype": "tcp", 00:04:11.324 "method": "nvmf_get_transports", 00:04:11.324 "req_id": 1 00:04:11.324 } 00:04:11.324 Got JSON-RPC error response 00:04:11.324 response: 00:04:11.324 { 00:04:11.324 "code": -19, 00:04:11.324 "message": "No such device" 00:04:11.324 } 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.324 [2024-07-15 16:46:17.700670] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.324 16:46:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:11.324 { 00:04:11.324 "subsystems": [ 00:04:11.324 { 00:04:11.324 "subsystem": "vfio_user_target", 00:04:11.324 "config": null 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "keyring", 00:04:11.324 "config": [] 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "iobuf", 00:04:11.324 "config": [ 00:04:11.324 { 00:04:11.324 "method": "iobuf_set_options", 00:04:11.324 "params": { 00:04:11.324 "small_pool_count": 8192, 00:04:11.324 "large_pool_count": 1024, 00:04:11.324 "small_bufsize": 8192, 00:04:11.324 "large_bufsize": 135168 00:04:11.324 } 00:04:11.324 } 00:04:11.324 ] 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "sock", 00:04:11.324 "config": [ 00:04:11.324 { 00:04:11.324 "method": "sock_set_default_impl", 00:04:11.324 "params": { 00:04:11.324 "impl_name": "posix" 00:04:11.324 } 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "method": "sock_impl_set_options", 00:04:11.324 "params": { 00:04:11.324 "impl_name": "ssl", 00:04:11.324 "recv_buf_size": 4096, 00:04:11.324 "send_buf_size": 4096, 00:04:11.324 "enable_recv_pipe": true, 00:04:11.324 "enable_quickack": false, 00:04:11.324 "enable_placement_id": 0, 00:04:11.324 "enable_zerocopy_send_server": true, 00:04:11.324 "enable_zerocopy_send_client": false, 00:04:11.324 "zerocopy_threshold": 0, 
00:04:11.324 "tls_version": 0, 00:04:11.324 "enable_ktls": false 00:04:11.324 } 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "method": "sock_impl_set_options", 00:04:11.324 "params": { 00:04:11.324 "impl_name": "posix", 00:04:11.324 "recv_buf_size": 2097152, 00:04:11.324 "send_buf_size": 2097152, 00:04:11.324 "enable_recv_pipe": true, 00:04:11.324 "enable_quickack": false, 00:04:11.324 "enable_placement_id": 0, 00:04:11.324 "enable_zerocopy_send_server": true, 00:04:11.324 "enable_zerocopy_send_client": false, 00:04:11.324 "zerocopy_threshold": 0, 00:04:11.324 "tls_version": 0, 00:04:11.324 "enable_ktls": false 00:04:11.324 } 00:04:11.324 } 00:04:11.324 ] 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "vmd", 00:04:11.324 "config": [] 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "accel", 00:04:11.324 "config": [ 00:04:11.324 { 00:04:11.324 "method": "accel_set_options", 00:04:11.324 "params": { 00:04:11.324 "small_cache_size": 128, 00:04:11.324 "large_cache_size": 16, 00:04:11.324 "task_count": 2048, 00:04:11.324 "sequence_count": 2048, 00:04:11.324 "buf_count": 2048 00:04:11.324 } 00:04:11.324 } 00:04:11.324 ] 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "bdev", 00:04:11.324 "config": [ 00:04:11.324 { 00:04:11.324 "method": "bdev_set_options", 00:04:11.324 "params": { 00:04:11.324 "bdev_io_pool_size": 65535, 00:04:11.324 "bdev_io_cache_size": 256, 00:04:11.324 "bdev_auto_examine": true, 00:04:11.324 "iobuf_small_cache_size": 128, 00:04:11.324 "iobuf_large_cache_size": 16 00:04:11.324 } 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "method": "bdev_raid_set_options", 00:04:11.324 "params": { 00:04:11.324 "process_window_size_kb": 1024 00:04:11.324 } 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "method": "bdev_iscsi_set_options", 00:04:11.324 "params": { 00:04:11.324 "timeout_sec": 30 00:04:11.324 } 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "method": "bdev_nvme_set_options", 00:04:11.324 "params": { 00:04:11.324 "action_on_timeout": 
"none", 00:04:11.324 "timeout_us": 0, 00:04:11.324 "timeout_admin_us": 0, 00:04:11.324 "keep_alive_timeout_ms": 10000, 00:04:11.324 "arbitration_burst": 0, 00:04:11.324 "low_priority_weight": 0, 00:04:11.324 "medium_priority_weight": 0, 00:04:11.324 "high_priority_weight": 0, 00:04:11.324 "nvme_adminq_poll_period_us": 10000, 00:04:11.324 "nvme_ioq_poll_period_us": 0, 00:04:11.324 "io_queue_requests": 0, 00:04:11.324 "delay_cmd_submit": true, 00:04:11.324 "transport_retry_count": 4, 00:04:11.324 "bdev_retry_count": 3, 00:04:11.324 "transport_ack_timeout": 0, 00:04:11.324 "ctrlr_loss_timeout_sec": 0, 00:04:11.324 "reconnect_delay_sec": 0, 00:04:11.324 "fast_io_fail_timeout_sec": 0, 00:04:11.324 "disable_auto_failback": false, 00:04:11.324 "generate_uuids": false, 00:04:11.324 "transport_tos": 0, 00:04:11.324 "nvme_error_stat": false, 00:04:11.324 "rdma_srq_size": 0, 00:04:11.324 "io_path_stat": false, 00:04:11.324 "allow_accel_sequence": false, 00:04:11.324 "rdma_max_cq_size": 0, 00:04:11.324 "rdma_cm_event_timeout_ms": 0, 00:04:11.324 "dhchap_digests": [ 00:04:11.324 "sha256", 00:04:11.324 "sha384", 00:04:11.324 "sha512" 00:04:11.324 ], 00:04:11.324 "dhchap_dhgroups": [ 00:04:11.324 "null", 00:04:11.324 "ffdhe2048", 00:04:11.324 "ffdhe3072", 00:04:11.324 "ffdhe4096", 00:04:11.324 "ffdhe6144", 00:04:11.324 "ffdhe8192" 00:04:11.324 ] 00:04:11.324 } 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "method": "bdev_nvme_set_hotplug", 00:04:11.324 "params": { 00:04:11.324 "period_us": 100000, 00:04:11.324 "enable": false 00:04:11.324 } 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "method": "bdev_wait_for_examine" 00:04:11.324 } 00:04:11.324 ] 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "scsi", 00:04:11.324 "config": null 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "scheduler", 00:04:11.324 "config": [ 00:04:11.324 { 00:04:11.324 "method": "framework_set_scheduler", 00:04:11.324 "params": { 00:04:11.324 "name": "static" 00:04:11.324 } 00:04:11.324 } 
00:04:11.324 ] 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "vhost_scsi", 00:04:11.324 "config": [] 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "vhost_blk", 00:04:11.324 "config": [] 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "ublk", 00:04:11.324 "config": [] 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "nbd", 00:04:11.324 "config": [] 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "subsystem": "nvmf", 00:04:11.324 "config": [ 00:04:11.324 { 00:04:11.324 "method": "nvmf_set_config", 00:04:11.324 "params": { 00:04:11.324 "discovery_filter": "match_any", 00:04:11.324 "admin_cmd_passthru": { 00:04:11.324 "identify_ctrlr": false 00:04:11.324 } 00:04:11.324 } 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "method": "nvmf_set_max_subsystems", 00:04:11.324 "params": { 00:04:11.324 "max_subsystems": 1024 00:04:11.324 } 00:04:11.324 }, 00:04:11.324 { 00:04:11.324 "method": "nvmf_set_crdt", 00:04:11.324 "params": { 00:04:11.324 "crdt1": 0, 00:04:11.324 "crdt2": 0, 00:04:11.325 "crdt3": 0 00:04:11.325 } 00:04:11.325 }, 00:04:11.325 { 00:04:11.325 "method": "nvmf_create_transport", 00:04:11.325 "params": { 00:04:11.325 "trtype": "TCP", 00:04:11.325 "max_queue_depth": 128, 00:04:11.325 "max_io_qpairs_per_ctrlr": 127, 00:04:11.325 "in_capsule_data_size": 4096, 00:04:11.325 "max_io_size": 131072, 00:04:11.325 "io_unit_size": 131072, 00:04:11.325 "max_aq_depth": 128, 00:04:11.325 "num_shared_buffers": 511, 00:04:11.325 "buf_cache_size": 4294967295, 00:04:11.325 "dif_insert_or_strip": false, 00:04:11.325 "zcopy": false, 00:04:11.325 "c2h_success": true, 00:04:11.325 "sock_priority": 0, 00:04:11.325 "abort_timeout_sec": 1, 00:04:11.325 "ack_timeout": 0, 00:04:11.325 "data_wr_pool_size": 0 00:04:11.325 } 00:04:11.325 } 00:04:11.325 ] 00:04:11.325 }, 00:04:11.325 { 00:04:11.325 "subsystem": "iscsi", 00:04:11.325 "config": [ 00:04:11.325 { 00:04:11.325 "method": "iscsi_set_options", 00:04:11.325 "params": { 00:04:11.325 "node_base": 
"iqn.2016-06.io.spdk", 00:04:11.325 "max_sessions": 128, 00:04:11.325 "max_connections_per_session": 2, 00:04:11.325 "max_queue_depth": 64, 00:04:11.325 "default_time2wait": 2, 00:04:11.325 "default_time2retain": 20, 00:04:11.325 "first_burst_length": 8192, 00:04:11.325 "immediate_data": true, 00:04:11.325 "allow_duplicated_isid": false, 00:04:11.325 "error_recovery_level": 0, 00:04:11.325 "nop_timeout": 60, 00:04:11.325 "nop_in_interval": 30, 00:04:11.325 "disable_chap": false, 00:04:11.325 "require_chap": false, 00:04:11.325 "mutual_chap": false, 00:04:11.325 "chap_group": 0, 00:04:11.325 "max_large_datain_per_connection": 64, 00:04:11.325 "max_r2t_per_connection": 4, 00:04:11.325 "pdu_pool_size": 36864, 00:04:11.325 "immediate_data_pool_size": 16384, 00:04:11.325 "data_out_pool_size": 2048 00:04:11.325 } 00:04:11.325 } 00:04:11.325 ] 00:04:11.325 } 00:04:11.325 ] 00:04:11.325 } 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4091402 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4091402 ']' 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4091402 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4091402 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4091402' 00:04:11.325 
killing process with pid 4091402 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4091402 00:04:11.325 16:46:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4091402 00:04:11.584 16:46:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4091644 00:04:11.584 16:46:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:11.584 16:46:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:16.850 16:46:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4091644 00:04:16.850 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4091644 ']' 00:04:16.850 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4091644 00:04:16.850 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:16.850 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:16.850 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4091644 00:04:16.850 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:16.850 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:16.850 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4091644' 00:04:16.850 killing process with pid 4091644 00:04:16.850 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4091644 00:04:16.850 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4091644 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:17.108 00:04:17.108 real 0m6.741s 00:04:17.108 user 0m6.575s 00:04:17.108 sys 0m0.583s 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.108 ************************************ 00:04:17.108 END TEST skip_rpc_with_json 00:04:17.108 ************************************ 00:04:17.108 16:46:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:17.108 16:46:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:17.108 16:46:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.108 16:46:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.108 16:46:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.108 ************************************ 00:04:17.108 START TEST skip_rpc_with_delay 00:04:17.108 ************************************ 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.108 [2024-07-15 16:46:23.686045] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:17.108 [2024-07-15 16:46:23.686105] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:17.108 00:04:17.108 real 0m0.052s 00:04:17.108 user 0m0.037s 00:04:17.108 sys 0m0.015s 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.108 16:46:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:17.108 ************************************ 00:04:17.108 END TEST skip_rpc_with_delay 00:04:17.108 ************************************ 00:04:17.108 16:46:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:17.108 16:46:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:17.108 16:46:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:17.108 16:46:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:17.108 16:46:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.108 16:46:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.108 16:46:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.108 ************************************ 00:04:17.108 START TEST exit_on_failed_rpc_init 00:04:17.108 ************************************ 00:04:17.108 16:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:17.108 16:46:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:17.108 16:46:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4092611 00:04:17.108 16:46:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4092611 00:04:17.108 16:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 4092611 ']' 00:04:17.108 16:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.108 16:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.108 16:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.109 16:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.109 16:46:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.367 [2024-07-15 16:46:23.802436] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:04:17.367 [2024-07-15 16:46:23.802475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092611 ] 00:04:17.367 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.367 [2024-07-15 16:46:23.855245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.367 [2024-07-15 16:46:23.933616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:18.305 16:46:24 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.305 [2024-07-15 16:46:24.653536] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:04:18.305 [2024-07-15 16:46:24.653581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092841 ] 00:04:18.305 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.305 [2024-07-15 16:46:24.705329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.305 [2024-07-15 16:46:24.778709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.305 [2024-07-15 16:46:24.778772] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:18.305 [2024-07-15 16:46:24.778781] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:18.305 [2024-07-15 16:46:24.778787] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4092611 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 4092611 ']' 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 4092611 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4092611 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4092611' 
00:04:18.305 killing process with pid 4092611 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 4092611 00:04:18.305 16:46:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 4092611 00:04:18.564 00:04:18.564 real 0m1.433s 00:04:18.564 user 0m1.675s 00:04:18.564 sys 0m0.357s 00:04:18.564 16:46:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.564 16:46:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.564 ************************************ 00:04:18.564 END TEST exit_on_failed_rpc_init 00:04:18.564 ************************************ 00:04:18.564 16:46:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.564 16:46:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:18.823 00:04:18.823 real 0m13.949s 00:04:18.823 user 0m13.583s 00:04:18.823 sys 0m1.433s 00:04:18.823 16:46:25 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.823 16:46:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.823 ************************************ 00:04:18.823 END TEST skip_rpc 00:04:18.823 ************************************ 00:04:18.823 16:46:25 -- common/autotest_common.sh@1142 -- # return 0 00:04:18.823 16:46:25 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:18.823 16:46:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.823 16:46:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.824 16:46:25 -- common/autotest_common.sh@10 -- # set +x 00:04:18.824 ************************************ 00:04:18.824 START TEST rpc_client 00:04:18.824 ************************************ 00:04:18.824 16:46:25 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:18.824 * Looking for test storage... 00:04:18.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:18.824 16:46:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:18.824 OK 00:04:18.824 16:46:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:18.824 00:04:18.824 real 0m0.099s 00:04:18.824 user 0m0.041s 00:04:18.824 sys 0m0.065s 00:04:18.824 16:46:25 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.824 16:46:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:18.824 ************************************ 00:04:18.824 END TEST rpc_client 00:04:18.824 ************************************ 00:04:18.824 16:46:25 -- common/autotest_common.sh@1142 -- # return 0 00:04:18.824 16:46:25 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:18.824 16:46:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.824 16:46:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.824 16:46:25 -- common/autotest_common.sh@10 -- # set +x 00:04:18.824 ************************************ 00:04:18.824 START TEST json_config 00:04:18.824 ************************************ 00:04:18.824 16:46:25 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:19.082 16:46:25 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.082 
16:46:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.082 16:46:25 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:19.082 16:46:25 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.082 16:46:25 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.082 16:46:25 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.082 16:46:25 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.082 16:46:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.082 16:46:25 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.082 16:46:25 json_config -- paths/export.sh@5 -- # export PATH 00:04:19.083 16:46:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.083 16:46:25 json_config -- nvmf/common.sh@47 -- # : 0 00:04:19.083 16:46:25 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:19.083 
16:46:25 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:19.083 16:46:25 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.083 16:46:25 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.083 16:46:25 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.083 16:46:25 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:19.083 16:46:25 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:19.083 16:46:25 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:19.083 INFO: JSON configuration test init 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:19.083 16:46:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.083 16:46:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:19.083 16:46:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.083 16:46:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.083 16:46:25 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:19.083 16:46:25 json_config -- json_config/common.sh@9 -- # local app=target 00:04:19.083 16:46:25 json_config -- json_config/common.sh@10 -- # shift 00:04:19.083 16:46:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.083 16:46:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.083 16:46:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.083 16:46:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.083 16:46:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:04:19.083 16:46:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4093039 00:04:19.083 16:46:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:19.083 16:46:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:19.083 Waiting for target to run... 00:04:19.083 16:46:25 json_config -- json_config/common.sh@25 -- # waitforlisten 4093039 /var/tmp/spdk_tgt.sock 00:04:19.083 16:46:25 json_config -- common/autotest_common.sh@829 -- # '[' -z 4093039 ']' 00:04:19.083 16:46:25 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.083 16:46:25 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.083 16:46:25 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.083 16:46:25 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.083 16:46:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.083 [2024-07-15 16:46:25.598839] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:04:19.083 [2024-07-15 16:46:25.598889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4093039 ] 00:04:19.083 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.341 [2024-07-15 16:46:25.867550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.341 [2024-07-15 16:46:25.935269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.912 16:46:26 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:19.912 16:46:26 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:19.912 16:46:26 json_config -- json_config/common.sh@26 -- # echo '' 00:04:19.912 00:04:19.912 16:46:26 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:19.912 16:46:26 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:19.912 16:46:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.912 16:46:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.912 16:46:26 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:19.912 16:46:26 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:19.912 16:46:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.912 16:46:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.912 16:46:26 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:19.912 16:46:26 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:19.912 16:46:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:23.199 
16:46:29 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:23.199 16:46:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:23.199 16:46:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:23.199 16:46:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:23.199 16:46:29 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:23.199 16:46:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@286 
-- # [[ 0 -eq 1 ]] 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:23.199 16:46:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:23.199 16:46:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:23.199 16:46:29 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:23.199 16:46:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:23.456 MallocForNvmf0 00:04:23.456 16:46:29 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:23.456 16:46:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:23.456 MallocForNvmf1 00:04:23.456 16:46:30 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:23.456 16:46:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:23.714 [2024-07-15 16:46:30.262277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.714 16:46:30 json_config -- 
json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.714 16:46:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.973 16:46:30 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.973 16:46:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.973 16:46:30 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:23.973 16:46:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:24.230 16:46:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:24.230 16:46:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:24.488 [2024-07-15 16:46:30.928413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:24.488 16:46:30 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:24.488 16:46:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.488 16:46:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.488 16:46:30 json_config -- json_config/json_config.sh@293 -- # timing_exit 
json_config_setup_target 00:04:24.488 16:46:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.488 16:46:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.488 16:46:30 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:24.488 16:46:30 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:24.488 16:46:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:24.746 MallocBdevForConfigChangeCheck 00:04:24.746 16:46:31 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:24.746 16:46:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.746 16:46:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.746 16:46:31 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:24.746 16:46:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.006 16:46:31 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:25.006 INFO: shutting down applications... 
00:04:25.006 16:46:31 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:25.006 16:46:31 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:25.006 16:46:31 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:25.006 16:46:31 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:26.379 Calling clear_iscsi_subsystem 00:04:26.379 Calling clear_nvmf_subsystem 00:04:26.380 Calling clear_nbd_subsystem 00:04:26.380 Calling clear_ublk_subsystem 00:04:26.380 Calling clear_vhost_blk_subsystem 00:04:26.380 Calling clear_vhost_scsi_subsystem 00:04:26.380 Calling clear_bdev_subsystem 00:04:26.640 16:46:33 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:26.640 16:46:33 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:26.640 16:46:33 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:26.640 16:46:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.640 16:46:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:26.640 16:46:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:26.899 16:46:33 json_config -- json_config/json_config.sh@345 -- # break 00:04:26.899 16:46:33 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:26.899 16:46:33 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:26.899 16:46:33 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:26.899 16:46:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:26.899 16:46:33 json_config -- json_config/common.sh@35 -- # [[ -n 4093039 ]] 00:04:26.899 16:46:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4093039 00:04:26.899 16:46:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:26.899 16:46:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.899 16:46:33 json_config -- json_config/common.sh@41 -- # kill -0 4093039 00:04:26.899 16:46:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:27.495 16:46:33 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:27.495 16:46:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.495 16:46:33 json_config -- json_config/common.sh@41 -- # kill -0 4093039 00:04:27.495 16:46:33 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:27.495 16:46:33 json_config -- json_config/common.sh@43 -- # break 00:04:27.495 16:46:33 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:27.495 16:46:33 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:27.495 SPDK target shutdown done 00:04:27.495 16:46:33 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:27.495 INFO: relaunching applications... 
00:04:27.495 16:46:33 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.495 16:46:33 json_config -- json_config/common.sh@9 -- # local app=target 00:04:27.495 16:46:33 json_config -- json_config/common.sh@10 -- # shift 00:04:27.495 16:46:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.495 16:46:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.495 16:46:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.495 16:46:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.495 16:46:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.495 16:46:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4094596 00:04:27.495 16:46:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.495 Waiting for target to run... 00:04:27.495 16:46:33 json_config -- json_config/common.sh@25 -- # waitforlisten 4094596 /var/tmp/spdk_tgt.sock 00:04:27.495 16:46:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.495 16:46:33 json_config -- common/autotest_common.sh@829 -- # '[' -z 4094596 ']' 00:04:27.495 16:46:33 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.495 16:46:33 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.495 16:46:33 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:27.495 16:46:33 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.495 16:46:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.495 [2024-07-15 16:46:33.946001] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:04:27.495 [2024-07-15 16:46:33.946059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4094596 ] 00:04:27.495 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.756 [2024-07-15 16:46:34.234742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.756 [2024-07-15 16:46:34.302941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.043 [2024-07-15 16:46:37.318753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.043 [2024-07-15 16:46:37.351075] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.043 16:46:37 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.043 16:46:37 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:31.043 16:46:37 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.043 00:04:31.043 16:46:37 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:31.043 16:46:37 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:31.043 INFO: Checking if target configuration is the same... 
00:04:31.043 16:46:37 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.043 16:46:37 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:31.043 16:46:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.043 + '[' 2 -ne 2 ']' 00:04:31.043 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:31.043 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:31.043 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:31.043 +++ basename /dev/fd/62 00:04:31.043 ++ mktemp /tmp/62.XXX 00:04:31.043 + tmp_file_1=/tmp/62.gug 00:04:31.043 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.043 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.043 + tmp_file_2=/tmp/spdk_tgt_config.json.rRL 00:04:31.043 + ret=0 00:04:31.043 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.043 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.301 + diff -u /tmp/62.gug /tmp/spdk_tgt_config.json.rRL 00:04:31.301 + echo 'INFO: JSON config files are the same' 00:04:31.301 INFO: JSON config files are the same 00:04:31.301 + rm /tmp/62.gug /tmp/spdk_tgt_config.json.rRL 00:04:31.301 + exit 0 00:04:31.301 16:46:37 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:31.301 16:46:37 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:31.301 INFO: changing configuration and checking if this can be detected... 
00:04:31.301 16:46:37 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.301 16:46:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.301 16:46:37 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.301 16:46:37 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:31.301 16:46:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.301 + '[' 2 -ne 2 ']' 00:04:31.301 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:31.301 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:31.301 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:31.301 +++ basename /dev/fd/62 00:04:31.301 ++ mktemp /tmp/62.XXX 00:04:31.301 + tmp_file_1=/tmp/62.HBK 00:04:31.301 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.301 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.301 + tmp_file_2=/tmp/spdk_tgt_config.json.Y7k 00:04:31.301 + ret=0 00:04:31.301 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.870 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.870 + diff -u /tmp/62.HBK /tmp/spdk_tgt_config.json.Y7k 00:04:31.870 + ret=1 00:04:31.870 + echo '=== Start of file: /tmp/62.HBK ===' 00:04:31.870 + cat /tmp/62.HBK 00:04:31.870 + echo '=== End of file: /tmp/62.HBK ===' 00:04:31.870 + echo '' 00:04:31.870 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Y7k ===' 00:04:31.870 + cat /tmp/spdk_tgt_config.json.Y7k 00:04:31.870 + echo '=== End of file: /tmp/spdk_tgt_config.json.Y7k ===' 00:04:31.870 + echo '' 00:04:31.870 + rm /tmp/62.HBK /tmp/spdk_tgt_config.json.Y7k 00:04:31.870 + exit 1 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:31.870 INFO: configuration change detected. 
00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@317 -- # [[ -n 4094596 ]] 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.870 16:46:38 json_config -- json_config/json_config.sh@323 -- # killprocess 4094596 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@948 -- # '[' -z 4094596 ']' 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@952 -- # kill -0 
4094596 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@953 -- # uname 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4094596 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4094596' 00:04:31.870 killing process with pid 4094596 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@967 -- # kill 4094596 00:04:31.870 16:46:38 json_config -- common/autotest_common.sh@972 -- # wait 4094596 00:04:33.249 16:46:39 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.249 16:46:39 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:33.249 16:46:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.249 16:46:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.249 16:46:39 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:33.249 16:46:39 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:33.249 INFO: Success 00:04:33.249 00:04:33.249 real 0m14.451s 00:04:33.249 user 0m15.298s 00:04:33.249 sys 0m1.639s 00:04:33.249 16:46:39 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.249 16:46:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.249 ************************************ 00:04:33.249 END TEST json_config 00:04:33.249 ************************************ 00:04:33.508 16:46:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:33.508 16:46:39 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:33.508 16:46:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.508 16:46:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.508 16:46:39 -- common/autotest_common.sh@10 -- # set +x 00:04:33.508 ************************************ 00:04:33.508 START TEST json_config_extra_key 00:04:33.508 ************************************ 00:04:33.508 16:46:39 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:33.508 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:33.508 16:46:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.509 16:46:40 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:33.509 16:46:40 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.509 16:46:40 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.509 16:46:40 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.509 16:46:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.509 16:46:40 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.509 16:46:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.509 16:46:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:33.509 16:46:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.509 16:46:40 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:33.509 16:46:40 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:33.509 16:46:40 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:33.509 16:46:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.509 16:46:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.509 16:46:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:33.509 16:46:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:33.509 16:46:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:33.509 16:46:40 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:33.509 INFO: launching applications... 
00:04:33.509 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:33.509 16:46:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:33.509 16:46:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:33.509 16:46:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.509 16:46:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.509 16:46:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.509 16:46:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.509 16:46:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.509 16:46:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4095733 00:04:33.509 16:46:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.509 Waiting for target to run... 
00:04:33.509 16:46:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4095733 /var/tmp/spdk_tgt.sock 00:04:33.509 16:46:40 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 4095733 ']' 00:04:33.509 16:46:40 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:33.509 16:46:40 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.509 16:46:40 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.509 16:46:40 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.509 16:46:40 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.509 16:46:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.509 [2024-07-15 16:46:40.121172] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:04:33.509 [2024-07-15 16:46:40.121219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4095733 ] 00:04:33.509 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.077 [2024-07-15 16:46:40.556306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.077 [2024-07-15 16:46:40.643669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.335 16:46:40 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.335 16:46:40 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:34.335 16:46:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:34.335 00:04:34.336 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:34.336 INFO: shutting down applications... 
00:04:34.336 16:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:34.336 16:46:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:34.336 16:46:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:34.336 16:46:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4095733 ]] 00:04:34.336 16:46:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4095733 00:04:34.336 16:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:34.336 16:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.336 16:46:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4095733 00:04:34.336 16:46:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.904 16:46:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.904 16:46:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.904 16:46:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4095733 00:04:34.904 16:46:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.904 16:46:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:34.904 16:46:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.904 16:46:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.904 SPDK target shutdown done 00:04:34.904 16:46:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:34.904 Success 00:04:34.904 00:04:34.904 real 0m1.452s 00:04:34.904 user 0m1.079s 00:04:34.904 sys 0m0.527s 00:04:34.904 16:46:41 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.904 16:46:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.904 
************************************ 00:04:34.904 END TEST json_config_extra_key 00:04:34.904 ************************************ 00:04:34.904 16:46:41 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.904 16:46:41 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.904 16:46:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.904 16:46:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.904 16:46:41 -- common/autotest_common.sh@10 -- # set +x 00:04:34.904 ************************************ 00:04:34.904 START TEST alias_rpc 00:04:34.904 ************************************ 00:04:34.904 16:46:41 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.904 * Looking for test storage... 00:04:35.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:35.163 16:46:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:35.163 16:46:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.163 16:46:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4096012 00:04:35.163 16:46:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4096012 00:04:35.163 16:46:41 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 4096012 ']' 00:04:35.163 16:46:41 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.163 16:46:41 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.163 16:46:41 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:35.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.163 16:46:41 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.163 16:46:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.163 [2024-07-15 16:46:41.621525] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:04:35.163 [2024-07-15 16:46:41.621579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096012 ] 00:04:35.163 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.163 [2024-07-15 16:46:41.673584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.163 [2024-07-15 16:46:41.753038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.422 16:46:41 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.422 16:46:41 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:35.422 16:46:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:35.681 16:46:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4096012 00:04:35.681 16:46:42 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 4096012 ']' 00:04:35.681 16:46:42 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 4096012 00:04:35.681 16:46:42 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:35.681 16:46:42 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.681 16:46:42 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096012 00:04:35.681 16:46:42 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.681 16:46:42 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.681 
16:46:42 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096012' 00:04:35.681 killing process with pid 4096012 00:04:35.681 16:46:42 alias_rpc -- common/autotest_common.sh@967 -- # kill 4096012 00:04:35.681 16:46:42 alias_rpc -- common/autotest_common.sh@972 -- # wait 4096012 00:04:35.940 00:04:35.940 real 0m1.010s 00:04:35.940 user 0m1.055s 00:04:35.940 sys 0m0.358s 00:04:35.940 16:46:42 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.940 16:46:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.940 ************************************ 00:04:35.940 END TEST alias_rpc 00:04:35.940 ************************************ 00:04:35.940 16:46:42 -- common/autotest_common.sh@1142 -- # return 0 00:04:35.940 16:46:42 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:35.940 16:46:42 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:35.940 16:46:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.940 16:46:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.940 16:46:42 -- common/autotest_common.sh@10 -- # set +x 00:04:35.940 ************************************ 00:04:35.940 START TEST spdkcli_tcp 00:04:35.940 ************************************ 00:04:35.940 16:46:42 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:36.199 * Looking for test storage... 
00:04:36.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:36.199 16:46:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:36.199 16:46:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:36.199 16:46:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:36.199 16:46:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:36.199 16:46:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:36.199 16:46:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:36.199 16:46:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:36.199 16:46:42 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.199 16:46:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.199 16:46:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4096291 00:04:36.199 16:46:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4096291 00:04:36.199 16:46:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:36.199 16:46:42 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 4096291 ']' 00:04:36.200 16:46:42 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.200 16:46:42 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.200 16:46:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
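The repeated "Waiting for process to start up and listen on UNIX domain socket ..." messages come from a `waitforlisten` helper that retries up to `max_retries=100` times. A hedged sketch of the idea, assuming a plain `-S` file test as a simplified probe (the real helper in `autotest_common.sh` probes the socket through `rpc.py` rather than just checking the path exists):

```shell
#!/usr/bin/env bash
# Simplified waitforlisten: poll until a UNIX-domain socket appears at
# the given path, with a bounded retry budget. Names are illustrative.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        # -S is true if the path exists and is a socket
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}

# Demo of the timeout path with a short retry budget:
if wait_for_socket /tmp/no_such_app.sock 3; then
    echo "listening"
else
    echo "timed out"
fi
```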
00:04:36.200 16:46:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.200 16:46:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.200 [2024-07-15 16:46:42.712967] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:04:36.200 [2024-07-15 16:46:42.713008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096291 ] 00:04:36.200 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.200 [2024-07-15 16:46:42.766339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.200 [2024-07-15 16:46:42.841234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.200 [2024-07-15 16:46:42.841235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.135 16:46:43 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.135 16:46:43 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:37.135 16:46:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4096484 00:04:37.135 16:46:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:37.135 16:46:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:37.135 [ 00:04:37.135 "bdev_malloc_delete", 00:04:37.135 "bdev_malloc_create", 00:04:37.135 "bdev_null_resize", 00:04:37.135 "bdev_null_delete", 00:04:37.135 "bdev_null_create", 00:04:37.135 "bdev_nvme_cuse_unregister", 00:04:37.135 "bdev_nvme_cuse_register", 00:04:37.135 "bdev_opal_new_user", 00:04:37.135 "bdev_opal_set_lock_state", 00:04:37.135 "bdev_opal_delete", 00:04:37.135 "bdev_opal_get_info", 00:04:37.135 "bdev_opal_create", 00:04:37.135 "bdev_nvme_opal_revert", 00:04:37.135 
"bdev_nvme_opal_init", 00:04:37.135 "bdev_nvme_send_cmd", 00:04:37.135 "bdev_nvme_get_path_iostat", 00:04:37.135 "bdev_nvme_get_mdns_discovery_info", 00:04:37.135 "bdev_nvme_stop_mdns_discovery", 00:04:37.135 "bdev_nvme_start_mdns_discovery", 00:04:37.135 "bdev_nvme_set_multipath_policy", 00:04:37.135 "bdev_nvme_set_preferred_path", 00:04:37.135 "bdev_nvme_get_io_paths", 00:04:37.135 "bdev_nvme_remove_error_injection", 00:04:37.135 "bdev_nvme_add_error_injection", 00:04:37.136 "bdev_nvme_get_discovery_info", 00:04:37.136 "bdev_nvme_stop_discovery", 00:04:37.136 "bdev_nvme_start_discovery", 00:04:37.136 "bdev_nvme_get_controller_health_info", 00:04:37.136 "bdev_nvme_disable_controller", 00:04:37.136 "bdev_nvme_enable_controller", 00:04:37.136 "bdev_nvme_reset_controller", 00:04:37.136 "bdev_nvme_get_transport_statistics", 00:04:37.136 "bdev_nvme_apply_firmware", 00:04:37.136 "bdev_nvme_detach_controller", 00:04:37.136 "bdev_nvme_get_controllers", 00:04:37.136 "bdev_nvme_attach_controller", 00:04:37.136 "bdev_nvme_set_hotplug", 00:04:37.136 "bdev_nvme_set_options", 00:04:37.136 "bdev_passthru_delete", 00:04:37.136 "bdev_passthru_create", 00:04:37.136 "bdev_lvol_set_parent_bdev", 00:04:37.136 "bdev_lvol_set_parent", 00:04:37.136 "bdev_lvol_check_shallow_copy", 00:04:37.136 "bdev_lvol_start_shallow_copy", 00:04:37.136 "bdev_lvol_grow_lvstore", 00:04:37.136 "bdev_lvol_get_lvols", 00:04:37.136 "bdev_lvol_get_lvstores", 00:04:37.136 "bdev_lvol_delete", 00:04:37.136 "bdev_lvol_set_read_only", 00:04:37.136 "bdev_lvol_resize", 00:04:37.136 "bdev_lvol_decouple_parent", 00:04:37.136 "bdev_lvol_inflate", 00:04:37.136 "bdev_lvol_rename", 00:04:37.136 "bdev_lvol_clone_bdev", 00:04:37.136 "bdev_lvol_clone", 00:04:37.136 "bdev_lvol_snapshot", 00:04:37.136 "bdev_lvol_create", 00:04:37.136 "bdev_lvol_delete_lvstore", 00:04:37.136 "bdev_lvol_rename_lvstore", 00:04:37.136 "bdev_lvol_create_lvstore", 00:04:37.136 "bdev_raid_set_options", 00:04:37.136 "bdev_raid_remove_base_bdev", 
00:04:37.136 "bdev_raid_add_base_bdev", 00:04:37.136 "bdev_raid_delete", 00:04:37.136 "bdev_raid_create", 00:04:37.136 "bdev_raid_get_bdevs", 00:04:37.136 "bdev_error_inject_error", 00:04:37.136 "bdev_error_delete", 00:04:37.136 "bdev_error_create", 00:04:37.136 "bdev_split_delete", 00:04:37.136 "bdev_split_create", 00:04:37.136 "bdev_delay_delete", 00:04:37.136 "bdev_delay_create", 00:04:37.136 "bdev_delay_update_latency", 00:04:37.136 "bdev_zone_block_delete", 00:04:37.136 "bdev_zone_block_create", 00:04:37.136 "blobfs_create", 00:04:37.136 "blobfs_detect", 00:04:37.136 "blobfs_set_cache_size", 00:04:37.136 "bdev_aio_delete", 00:04:37.136 "bdev_aio_rescan", 00:04:37.136 "bdev_aio_create", 00:04:37.136 "bdev_ftl_set_property", 00:04:37.136 "bdev_ftl_get_properties", 00:04:37.136 "bdev_ftl_get_stats", 00:04:37.136 "bdev_ftl_unmap", 00:04:37.136 "bdev_ftl_unload", 00:04:37.136 "bdev_ftl_delete", 00:04:37.136 "bdev_ftl_load", 00:04:37.136 "bdev_ftl_create", 00:04:37.136 "bdev_virtio_attach_controller", 00:04:37.136 "bdev_virtio_scsi_get_devices", 00:04:37.136 "bdev_virtio_detach_controller", 00:04:37.136 "bdev_virtio_blk_set_hotplug", 00:04:37.136 "bdev_iscsi_delete", 00:04:37.136 "bdev_iscsi_create", 00:04:37.136 "bdev_iscsi_set_options", 00:04:37.136 "accel_error_inject_error", 00:04:37.136 "ioat_scan_accel_module", 00:04:37.136 "dsa_scan_accel_module", 00:04:37.136 "iaa_scan_accel_module", 00:04:37.136 "vfu_virtio_create_scsi_endpoint", 00:04:37.136 "vfu_virtio_scsi_remove_target", 00:04:37.136 "vfu_virtio_scsi_add_target", 00:04:37.136 "vfu_virtio_create_blk_endpoint", 00:04:37.136 "vfu_virtio_delete_endpoint", 00:04:37.136 "keyring_file_remove_key", 00:04:37.136 "keyring_file_add_key", 00:04:37.136 "keyring_linux_set_options", 00:04:37.136 "iscsi_get_histogram", 00:04:37.136 "iscsi_enable_histogram", 00:04:37.136 "iscsi_set_options", 00:04:37.136 "iscsi_get_auth_groups", 00:04:37.136 "iscsi_auth_group_remove_secret", 00:04:37.136 "iscsi_auth_group_add_secret", 
00:04:37.136 "iscsi_delete_auth_group", 00:04:37.136 "iscsi_create_auth_group", 00:04:37.136 "iscsi_set_discovery_auth", 00:04:37.136 "iscsi_get_options", 00:04:37.136 "iscsi_target_node_request_logout", 00:04:37.136 "iscsi_target_node_set_redirect", 00:04:37.136 "iscsi_target_node_set_auth", 00:04:37.136 "iscsi_target_node_add_lun", 00:04:37.136 "iscsi_get_stats", 00:04:37.136 "iscsi_get_connections", 00:04:37.136 "iscsi_portal_group_set_auth", 00:04:37.136 "iscsi_start_portal_group", 00:04:37.136 "iscsi_delete_portal_group", 00:04:37.136 "iscsi_create_portal_group", 00:04:37.136 "iscsi_get_portal_groups", 00:04:37.136 "iscsi_delete_target_node", 00:04:37.136 "iscsi_target_node_remove_pg_ig_maps", 00:04:37.136 "iscsi_target_node_add_pg_ig_maps", 00:04:37.136 "iscsi_create_target_node", 00:04:37.136 "iscsi_get_target_nodes", 00:04:37.136 "iscsi_delete_initiator_group", 00:04:37.136 "iscsi_initiator_group_remove_initiators", 00:04:37.136 "iscsi_initiator_group_add_initiators", 00:04:37.136 "iscsi_create_initiator_group", 00:04:37.136 "iscsi_get_initiator_groups", 00:04:37.136 "nvmf_set_crdt", 00:04:37.136 "nvmf_set_config", 00:04:37.136 "nvmf_set_max_subsystems", 00:04:37.136 "nvmf_stop_mdns_prr", 00:04:37.136 "nvmf_publish_mdns_prr", 00:04:37.136 "nvmf_subsystem_get_listeners", 00:04:37.136 "nvmf_subsystem_get_qpairs", 00:04:37.136 "nvmf_subsystem_get_controllers", 00:04:37.136 "nvmf_get_stats", 00:04:37.136 "nvmf_get_transports", 00:04:37.136 "nvmf_create_transport", 00:04:37.136 "nvmf_get_targets", 00:04:37.136 "nvmf_delete_target", 00:04:37.136 "nvmf_create_target", 00:04:37.136 "nvmf_subsystem_allow_any_host", 00:04:37.136 "nvmf_subsystem_remove_host", 00:04:37.136 "nvmf_subsystem_add_host", 00:04:37.136 "nvmf_ns_remove_host", 00:04:37.136 "nvmf_ns_add_host", 00:04:37.136 "nvmf_subsystem_remove_ns", 00:04:37.136 "nvmf_subsystem_add_ns", 00:04:37.136 "nvmf_subsystem_listener_set_ana_state", 00:04:37.136 "nvmf_discovery_get_referrals", 00:04:37.136 
"nvmf_discovery_remove_referral", 00:04:37.136 "nvmf_discovery_add_referral", 00:04:37.136 "nvmf_subsystem_remove_listener", 00:04:37.136 "nvmf_subsystem_add_listener", 00:04:37.136 "nvmf_delete_subsystem", 00:04:37.136 "nvmf_create_subsystem", 00:04:37.136 "nvmf_get_subsystems", 00:04:37.136 "env_dpdk_get_mem_stats", 00:04:37.136 "nbd_get_disks", 00:04:37.136 "nbd_stop_disk", 00:04:37.136 "nbd_start_disk", 00:04:37.136 "ublk_recover_disk", 00:04:37.136 "ublk_get_disks", 00:04:37.136 "ublk_stop_disk", 00:04:37.136 "ublk_start_disk", 00:04:37.136 "ublk_destroy_target", 00:04:37.136 "ublk_create_target", 00:04:37.136 "virtio_blk_create_transport", 00:04:37.136 "virtio_blk_get_transports", 00:04:37.136 "vhost_controller_set_coalescing", 00:04:37.136 "vhost_get_controllers", 00:04:37.136 "vhost_delete_controller", 00:04:37.136 "vhost_create_blk_controller", 00:04:37.136 "vhost_scsi_controller_remove_target", 00:04:37.136 "vhost_scsi_controller_add_target", 00:04:37.136 "vhost_start_scsi_controller", 00:04:37.136 "vhost_create_scsi_controller", 00:04:37.136 "thread_set_cpumask", 00:04:37.136 "framework_get_governor", 00:04:37.136 "framework_get_scheduler", 00:04:37.136 "framework_set_scheduler", 00:04:37.136 "framework_get_reactors", 00:04:37.136 "thread_get_io_channels", 00:04:37.136 "thread_get_pollers", 00:04:37.136 "thread_get_stats", 00:04:37.136 "framework_monitor_context_switch", 00:04:37.136 "spdk_kill_instance", 00:04:37.136 "log_enable_timestamps", 00:04:37.136 "log_get_flags", 00:04:37.136 "log_clear_flag", 00:04:37.136 "log_set_flag", 00:04:37.136 "log_get_level", 00:04:37.136 "log_set_level", 00:04:37.136 "log_get_print_level", 00:04:37.136 "log_set_print_level", 00:04:37.136 "framework_enable_cpumask_locks", 00:04:37.136 "framework_disable_cpumask_locks", 00:04:37.136 "framework_wait_init", 00:04:37.136 "framework_start_init", 00:04:37.136 "scsi_get_devices", 00:04:37.136 "bdev_get_histogram", 00:04:37.136 "bdev_enable_histogram", 00:04:37.136 
"bdev_set_qos_limit", 00:04:37.136 "bdev_set_qd_sampling_period", 00:04:37.136 "bdev_get_bdevs", 00:04:37.136 "bdev_reset_iostat", 00:04:37.136 "bdev_get_iostat", 00:04:37.136 "bdev_examine", 00:04:37.136 "bdev_wait_for_examine", 00:04:37.136 "bdev_set_options", 00:04:37.136 "notify_get_notifications", 00:04:37.136 "notify_get_types", 00:04:37.136 "accel_get_stats", 00:04:37.136 "accel_set_options", 00:04:37.136 "accel_set_driver", 00:04:37.136 "accel_crypto_key_destroy", 00:04:37.136 "accel_crypto_keys_get", 00:04:37.136 "accel_crypto_key_create", 00:04:37.136 "accel_assign_opc", 00:04:37.136 "accel_get_module_info", 00:04:37.136 "accel_get_opc_assignments", 00:04:37.136 "vmd_rescan", 00:04:37.136 "vmd_remove_device", 00:04:37.136 "vmd_enable", 00:04:37.136 "sock_get_default_impl", 00:04:37.136 "sock_set_default_impl", 00:04:37.136 "sock_impl_set_options", 00:04:37.136 "sock_impl_get_options", 00:04:37.136 "iobuf_get_stats", 00:04:37.136 "iobuf_set_options", 00:04:37.136 "keyring_get_keys", 00:04:37.136 "framework_get_pci_devices", 00:04:37.136 "framework_get_config", 00:04:37.136 "framework_get_subsystems", 00:04:37.136 "vfu_tgt_set_base_path", 00:04:37.136 "trace_get_info", 00:04:37.136 "trace_get_tpoint_group_mask", 00:04:37.136 "trace_disable_tpoint_group", 00:04:37.136 "trace_enable_tpoint_group", 00:04:37.136 "trace_clear_tpoint_mask", 00:04:37.136 "trace_set_tpoint_mask", 00:04:37.136 "spdk_get_version", 00:04:37.136 "rpc_get_methods" 00:04:37.136 ] 00:04:37.136 16:46:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.136 16:46:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:37.136 16:46:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4096291 00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 4096291 ']' 
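The long `rpc_get_methods` listing above was fetched over TCP: `tcp.sh@30` runs `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock` so that `rpc.py -s 127.0.0.1 -p 9998` can reach the UNIX-domain RPC socket. A dry-run sketch of that bridge, with the address, port, and socket path taken verbatim from the log (the function name is chosen here for illustration):

```shell
#!/usr/bin/env bash
# Dry-run of the TCP<->UNIX bridge the spdkcli_tcp test sets up.
IP_ADDRESS=127.0.0.1
PORT=9998
RPC_SOCK=/var/tmp/spdk.sock

bridge_cmd() {
    # In the real test this command runs in the background while rpc.py
    # connects over TCP: rpc.py -r 100 -t 2 -s $IP_ADDRESS -p $PORT ...
    echo "socat TCP-LISTEN:${PORT} UNIX-CONNECT:${RPC_SOCK}"
}

bridge_cmd
```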
00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 4096291 00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096291 00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096291' 00:04:37.136 killing process with pid 4096291 00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 4096291 00:04:37.136 16:46:43 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 4096291 00:04:37.703 00:04:37.703 real 0m1.508s 00:04:37.703 user 0m2.825s 00:04:37.703 sys 0m0.413s 00:04:37.703 16:46:44 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.703 16:46:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.703 ************************************ 00:04:37.703 END TEST spdkcli_tcp 00:04:37.703 ************************************ 00:04:37.703 16:46:44 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.703 16:46:44 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:37.703 16:46:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.703 16:46:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.703 16:46:44 -- common/autotest_common.sh@10 -- # set +x 00:04:37.703 ************************************ 00:04:37.703 START TEST dpdk_mem_utility 00:04:37.703 ************************************ 00:04:37.703 16:46:44 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:37.703 * Looking for test storage... 00:04:37.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:37.703 16:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:37.703 16:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4096600 00:04:37.703 16:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4096600 00:04:37.703 16:46:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.703 16:46:44 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 4096600 ']' 00:04:37.703 16:46:44 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.703 16:46:44 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.703 16:46:44 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.703 16:46:44 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.703 16:46:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.703 [2024-07-15 16:46:44.271767] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:04:37.703 [2024-07-15 16:46:44.271816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096600 ] 00:04:37.703 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.703 [2024-07-15 16:46:44.325043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.961 [2024-07-15 16:46:44.400950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.529 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.529 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:38.529 16:46:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:38.529 16:46:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:38.529 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.529 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.529 { 00:04:38.529 "filename": "/tmp/spdk_mem_dump.txt" 00:04:38.529 } 00:04:38.529 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.529 16:46:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:38.529 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:38.529 1 heaps totaling size 814.000000 MiB 00:04:38.529 size: 814.000000 MiB heap id: 0 00:04:38.529 end heaps---------- 00:04:38.529 8 mempools totaling size 598.116089 MiB 00:04:38.529 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:38.529 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:38.529 size: 84.521057 MiB name: bdev_io_4096600 00:04:38.529 size: 51.011292 MiB name: evtpool_4096600 
00:04:38.529 size: 50.003479 MiB name: msgpool_4096600
00:04:38.529 size: 21.763794 MiB name: PDU_Pool
00:04:38.529 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:38.529 size: 0.026123 MiB name: Session_Pool
00:04:38.529 end mempools-------
00:04:38.529 6 memzones totaling size 4.142822 MiB
00:04:38.529 size: 1.000366 MiB name: RG_ring_0_4096600
00:04:38.529 size: 1.000366 MiB name: RG_ring_1_4096600
00:04:38.529 size: 1.000366 MiB name: RG_ring_4_4096600
00:04:38.529 size: 1.000366 MiB name: RG_ring_5_4096600
00:04:38.529 size: 0.125366 MiB name: RG_ring_2_4096600
00:04:38.529 size: 0.015991 MiB name: RG_ring_3_4096600
00:04:38.529 end memzones-------
00:04:38.529 16:46:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:38.529 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:04:38.529 list of free elements. size: 12.519348 MiB
00:04:38.529 element at address: 0x200000400000 with size: 1.999512 MiB
00:04:38.529 element at address: 0x200018e00000 with size: 0.999878 MiB
00:04:38.529 element at address: 0x200019000000 with size: 0.999878 MiB
00:04:38.529 element at address: 0x200003e00000 with size: 0.996277 MiB
00:04:38.529 element at address: 0x200031c00000 with size: 0.994446 MiB
00:04:38.529 element at address: 0x200013800000 with size: 0.978699 MiB
00:04:38.529 element at address: 0x200007000000 with size: 0.959839 MiB
00:04:38.529 element at address: 0x200019200000 with size: 0.936584 MiB
00:04:38.529 element at address: 0x200000200000 with size: 0.841614 MiB
00:04:38.529 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:04:38.529 element at address: 0x20000b200000 with size: 0.490723 MiB
00:04:38.529 element at address: 0x200000800000 with size: 0.487793 MiB
00:04:38.529 element at address: 0x200019400000 with size: 0.485657 MiB
00:04:38.529 element at address: 0x200027e00000 with size: 0.410034 MiB
00:04:38.529 element at address: 0x200003a00000 with size: 0.355530 MiB
00:04:38.529 list of standard malloc elements. size: 199.218079 MiB
00:04:38.529 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:04:38.529 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:04:38.529 element at address: 0x200018efff80 with size: 1.000122 MiB
00:04:38.529 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:04:38.529 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:04:38.529 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:04:38.529 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:04:38.529 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:04:38.529 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:04:38.529 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:04:38.529 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:04:38.529 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:04:38.529 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:04:38.529 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:04:38.529 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:04:38.529 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:04:38.529 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:04:38.530 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:04:38.530 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200003adb300 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200003adb500 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200003affa80 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200003affb40 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:04:38.530 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:04:38.530 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:04:38.530 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:04:38.530 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:04:38.530 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:04:38.530 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:04:38.530 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:04:38.530 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:04:38.530 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:04:38.530 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200027e69040 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:04:38.530 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:04:38.530 list of memzone associated elements. size: 602.262573 MiB
00:04:38.530 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:04:38.530 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:38.530 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:04:38.530 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:38.530 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:04:38.530 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4096600_0
00:04:38.530 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:04:38.530 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4096600_0
00:04:38.530 element at address: 0x200003fff380 with size: 48.003052 MiB
00:04:38.530 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4096600_0
00:04:38.530 element at address: 0x2000195be940 with size: 20.255554 MiB
00:04:38.530 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:38.530 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:04:38.530 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:38.530 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:04:38.530 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4096600
00:04:38.530 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:04:38.530 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4096600
00:04:38.530 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:38.530 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4096600
00:04:38.530 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:04:38.530 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:38.530 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:04:38.530 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:38.530 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:04:38.530 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:38.530 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:04:38.530 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:38.530 element at address: 0x200003eff180 with size: 1.000488 MiB
00:04:38.530 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4096600
00:04:38.530 element at address: 0x200003affc00 with size: 1.000488 MiB
00:04:38.530 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4096600
00:04:38.530 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:04:38.530 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4096600
00:04:38.530 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:04:38.530 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4096600
00:04:38.530 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:04:38.530 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4096600
00:04:38.530 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:04:38.530 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:38.530 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:04:38.530 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:38.530 element at address: 0x20001947c540 with size: 0.250488 MiB
00:04:38.530 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:38.530 element at address: 0x200003adf880 with size: 0.125488 MiB
00:04:38.530 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4096600
00:04:38.530 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:04:38.530 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:38.530 element at address: 0x200027e69100 with size: 0.023743 MiB
00:04:38.530 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:38.530 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:04:38.530 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4096600
00:04:38.530 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:04:38.530 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:38.530 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:04:38.530 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4096600
00:04:38.530 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:04:38.530 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4096600
00:04:38.530 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:04:38.530 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:38.530 16:46:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:38.530 16:46:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4096600
00:04:38.530 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 4096600 ']'
00:04:38.530 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 4096600
00:04:38.530 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname
00:04:38.530 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:38.530 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096600
00:04:38.530 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:38.530 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:38.530 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096600'
00:04:38.530 killing process with pid 4096600
00:04:38.530 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 4096600
00:04:38.530 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 4096600
00:04:39.098
00:04:39.098 real 0m1.355s
00:04:39.098 user 0m1.425s
00:04:39.098 sys 0m0.369s
00:04:39.098 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:39.098 16:46:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:39.098 ************************************
00:04:39.098 END TEST dpdk_mem_utility
00:04:39.098 ************************************
00:04:39.098 16:46:45 -- common/autotest_common.sh@1142 -- # return 0
00:04:39.098 16:46:45 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:39.098 16:46:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:39.098 16:46:45 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:39.098 16:46:45 -- common/autotest_common.sh@10 -- # set +x
00:04:39.098 ************************************
00:04:39.098 START TEST event
00:04:39.098 ************************************
00:04:39.098 16:46:45 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:39.098 * Looking for test storage...
00:04:39.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:39.098 16:46:45 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:39.098 16:46:45 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:39.098 16:46:45 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:39.098 16:46:45 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:04:39.098 16:46:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:39.098 16:46:45 event -- common/autotest_common.sh@10 -- # set +x
00:04:39.098 ************************************
00:04:39.098 START TEST event_perf
00:04:39.098 ************************************
00:04:39.098 16:46:45 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:39.098 Running I/O for 1 seconds...[2024-07-15 16:46:45.692152] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:04:39.098 [2024-07-15 16:46:45.692219] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096887 ]
00:04:39.098 EAL: No free 2048 kB hugepages reported on node 1
00:04:39.098 [2024-07-15 16:46:45.750745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:39.357 [2024-07-15 16:46:45.826541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:39.357 [2024-07-15 16:46:45.826638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:39.357 [2024-07-15 16:46:45.826723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:04:39.357 [2024-07-15 16:46:45.826724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:40.294 Running I/O for 1 seconds...
00:04:40.294 lcore 0: 209644
00:04:40.294 lcore 1: 209644
00:04:40.294 lcore 2: 209644
00:04:40.294 lcore 3: 209643
00:04:40.294 done.
00:04:40.294
00:04:40.294 real 0m1.226s
00:04:40.294 user 0m4.140s
00:04:40.294 sys 0m0.082s
00:04:40.294 16:46:46 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:40.294 16:46:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:40.294 ************************************
00:04:40.294 END TEST event_perf
00:04:40.294 ************************************
00:04:40.294 16:46:46 event -- common/autotest_common.sh@1142 -- # return 0
00:04:40.294 16:46:46 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:40.294 16:46:46 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:04:40.294 16:46:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:40.294 16:46:46 event -- common/autotest_common.sh@10 -- # set +x
00:04:40.294 ************************************
00:04:40.294 START TEST event_reactor
00:04:40.294 ************************************
00:04:40.294 16:46:46 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:40.553 [2024-07-15 16:46:46.979532] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:04:40.553 [2024-07-15 16:46:46.979604] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097137 ]
00:04:40.553 EAL: No free 2048 kB hugepages reported on node 1
00:04:40.553 [2024-07-15 16:46:47.035611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:40.553 [2024-07-15 16:46:47.107365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:41.929 test_start
00:04:41.929 oneshot
00:04:41.929 tick 100
00:04:41.929 tick 100
00:04:41.929 tick 250
00:04:41.929 tick 100
00:04:41.929 tick 100
00:04:41.929 tick 250
00:04:41.929 tick 100
00:04:41.929 tick 500
00:04:41.929 tick 100
00:04:41.929 tick 100
00:04:41.929 tick 250
00:04:41.929 tick 100
00:04:41.929 tick 100
00:04:41.929 test_end
00:04:41.929
00:04:41.929 real 0m1.217s
00:04:41.929 user 0m1.147s
00:04:41.929 sys 0m0.066s
00:04:41.929 16:46:48 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:41.929 16:46:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:41.929 ************************************
00:04:41.929 END TEST event_reactor
00:04:41.929 ************************************
00:04:41.929 16:46:48 event -- common/autotest_common.sh@1142 -- # return 0
00:04:41.929 16:46:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:41.929 16:46:48 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:04:41.929 16:46:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:41.929 16:46:48 event -- common/autotest_common.sh@10 -- # set +x
00:04:41.929 ************************************
00:04:41.929 START TEST event_reactor_perf
00:04:41.929 ************************************
00:04:41.929 16:46:48 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:41.929 [2024-07-15 16:46:48.257872] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:04:41.929 [2024-07-15 16:46:48.257936] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097391 ]
00:04:41.929 EAL: No free 2048 kB hugepages reported on node 1
00:04:41.929 [2024-07-15 16:46:48.315477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:41.929 [2024-07-15 16:46:48.386810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:42.864 test_start
00:04:42.864 test_end
00:04:42.864 Performance: 503431 events per second
00:04:42.864
00:04:42.864 real 0m1.218s
00:04:42.864 user 0m1.139s
00:04:42.864 sys 0m0.074s
00:04:42.864 16:46:49 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:42.864 16:46:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:42.864 ************************************
00:04:42.864 END TEST event_reactor_perf
00:04:42.864 ************************************
00:04:42.864 16:46:49 event -- common/autotest_common.sh@1142 -- # return 0
00:04:42.864 16:46:49 event -- event/event.sh@49 -- # uname -s
00:04:42.864 16:46:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:42.864 16:46:49 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:42.864 16:46:49 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:42.864 16:46:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:42.864 16:46:49 event -- common/autotest_common.sh@10 -- # set +x
00:04:42.864 ************************************
00:04:42.864 START TEST event_scheduler
00:04:42.864 ************************************
00:04:42.864 16:46:49 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:43.123 * Looking for test storage...
00:04:43.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:43.123 16:46:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:43.123 16:46:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4097665
00:04:43.123 16:46:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:43.123 16:46:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:43.123 16:46:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4097665
00:04:43.123 16:46:49 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 4097665 ']'
00:04:43.123 16:46:49 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:43.123 16:46:49 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:43.123 16:46:49 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:43.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:43.123 16:46:49 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:43.123 16:46:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:43.123 [2024-07-15 16:46:49.638235] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:04:43.123 [2024-07-15 16:46:49.638285] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097665 ]
00:04:43.123 EAL: No free 2048 kB hugepages reported on node 1
00:04:43.123 [2024-07-15 16:46:49.688554] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:43.123 [2024-07-15 16:46:49.771631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:43.123 [2024-07-15 16:46:49.771716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:43.123 [2024-07-15 16:46:49.771802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:04:43.123 [2024-07-15 16:46:49.771803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:44.097 16:46:50 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:44.097 16:46:50 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:04:44.097 16:46:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:44.097 16:46:50 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 [2024-07-15 16:46:50.466191] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:04:44.097 [2024-07-15 16:46:50.466219] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:04:44.097 [2024-07-15 16:46:50.466232] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:44.097 [2024-07-15 16:46:50.466238] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:44.097 [2024-07-15 16:46:50.466243] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:44.097 16:46:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:44.097 16:46:50 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 [2024-07-15 16:46:50.539659] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:44.097 16:46:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:44.097 16:46:50 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:44.097 16:46:50 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 ************************************
00:04:44.097 START TEST scheduler_create_thread
00:04:44.097 ************************************
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 2
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 3
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 4
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 5
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 6
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 7
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 8
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 9
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 10
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.097 16:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.663 16:46:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:44.663 16:46:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:44.663 16:46:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:44.663 16:46:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:46.038 16:46:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:46.038 16:46:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:46.038 16:46:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:46.038 16:46:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:46.038 16:46:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:47.411 16:46:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:47.411
00:04:47.411 real 0m3.102s
00:04:47.411 user 0m0.023s
00:04:47.411 sys 0m0.006s
00:04:47.411 16:46:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:47.411 16:46:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:47.411 ************************************
00:04:47.411 END TEST scheduler_create_thread
00:04:47.411 ************************************
00:04:47.411 16:46:53 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0
00:04:47.411 16:46:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:47.411 16:46:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4097665
00:04:47.411 16:46:53 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 4097665 ']'
00:04:47.411 16:46:53 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 4097665
00:04:47.411 16:46:53 event.event_scheduler -- common/autotest_common.sh@953 -- # uname
00:04:47.411 16:46:53 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:47.411 16:46:53 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4097665
00:04:47.411 16:46:53 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:04:47.411 16:46:53 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:04:47.411 16:46:53 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4097665'
00:04:47.411 killing process with pid 4097665
00:04:47.411 16:46:53 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 4097665
00:04:47.411 16:46:53 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 4097665
00:04:47.411 [2024-07-15 16:46:54.054873] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:47.669
00:04:47.669 real 0m4.747s
00:04:47.669 user 0m9.300s
00:04:47.669 sys 0m0.366s
00:04:47.669 16:46:54 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:47.669 16:46:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:47.669 ************************************
00:04:47.669 END TEST event_scheduler
00:04:47.669 ************************************
00:04:47.669 16:46:54 event -- common/autotest_common.sh@1142 -- # return 0
00:04:47.669 16:46:54 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:47.669 16:46:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:47.669 16:46:54 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:47.669 16:46:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:47.669 16:46:54 event -- common/autotest_common.sh@10 -- # set +x
00:04:47.669 ************************************
00:04:47.669 START TEST app_repeat
00:04:47.669 ************************************
00:04:47.669 16:46:54 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4098498
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4098498'
00:04:47.669 Process app_repeat pid: 4098498
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:04:47.669 spdk_app_start Round 0
00:04:47.669 16:46:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4098498 /var/tmp/spdk-nbd.sock
00:04:47.669 16:46:54 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4098498 ']'
00:04:47.669 16:46:54 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:47.670 16:46:54 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:47.670 16:46:54 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:47.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:47.670 16:46:54 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.670 16:46:54 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:47.670 16:46:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.928 [2024-07-15 16:46:54.357140] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:04:47.928 [2024-07-15 16:46:54.357201] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098498 ] 00:04:47.928 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.928 [2024-07-15 16:46:54.413630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.928 [2024-07-15 16:46:54.493480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.928 [2024-07-15 16:46:54.493483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.496 16:46:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.496 16:46:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:48.496 16:46:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.756 Malloc0 00:04:48.756 16:46:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.015 Malloc1 00:04:49.015 16:46:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.015 16:46:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:49.274 /dev/nbd0 00:04:49.274 16:46:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:49.274 16:46:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:49.274 16:46:55 event.app_repeat -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.274 1+0 records in 00:04:49.274 1+0 records out 00:04:49.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180214 s, 22.7 MB/s 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:49.274 16:46:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.274 16:46:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.274 16:46:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:49.274 /dev/nbd1 00:04:49.274 16:46:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:49.274 16:46:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:49.274 16:46:55 
event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.274 1+0 records in 00:04:49.274 1+0 records out 00:04:49.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191926 s, 21.3 MB/s 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:49.274 16:46:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:49.274 16:46:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.274 16:46:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.274 16:46:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.274 16:46:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.533 16:46:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:49.533 { 00:04:49.533 "nbd_device": "/dev/nbd0", 00:04:49.533 "bdev_name": "Malloc0" 00:04:49.533 }, 00:04:49.533 { 00:04:49.533 "nbd_device": "/dev/nbd1", 00:04:49.533 "bdev_name": "Malloc1" 00:04:49.533 } 00:04:49.533 ]' 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.533 { 00:04:49.533 "nbd_device": "/dev/nbd0", 00:04:49.533 "bdev_name": "Malloc0" 00:04:49.533 }, 00:04:49.533 { 00:04:49.533 "nbd_device": "/dev/nbd1", 00:04:49.533 "bdev_name": "Malloc1" 00:04:49.533 } 00:04:49.533 ]' 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.533 /dev/nbd1' 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.533 /dev/nbd1' 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.533 256+0 records in 00:04:49.533 256+0 records out 00:04:49.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103394 s, 101 MB/s 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.533 256+0 records in 00:04:49.533 256+0 records out 00:04:49.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132905 s, 78.9 MB/s 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.533 16:46:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.792 256+0 records in 00:04:49.792 256+0 records out 00:04:49.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145187 s, 72.2 MB/s 00:04:49.792 16:46:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.792 16:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.792 16:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.792 16:46:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.792 16:46:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.792 16:46:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.792 
16:46:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.792 16:46:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.792 16:46:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.793 16:46:56 event.app_repeat 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.793 16:46:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:50.051 16:46:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:50.051 16:46:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:50.051 16:46:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:50.051 16:46:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.051 16:46:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.051 16:46:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:50.051 16:46:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.051 16:46:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.051 16:46:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.051 16:46:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.051 16:46:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 
00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.310 16:46:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.310 16:46:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.569 16:46:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:50.569 [2024-07-15 16:46:57.212087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.828 [2024-07-15 16:46:57.279658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.828 [2024-07-15 16:46:57.279666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.828 [2024-07-15 16:46:57.319716] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.828 [2024-07-15 16:46:57.319753] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:04:54.115 16:47:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.115 16:47:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:54.115 spdk_app_start Round 1 00:04:54.115 16:47:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4098498 /var/tmp/spdk-nbd.sock 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4098498 ']' 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:54.115 16:47:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.115 Malloc0 00:04:54.115 16:47:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.115 Malloc1 00:04:54.115 16:47:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1') 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.115 /dev/nbd0 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@870 -- 
# grep -q -w nbd0 /proc/partitions 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.115 1+0 records in 00:04:54.115 1+0 records out 00:04:54.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175993 s, 23.3 MB/s 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:54.115 16:47:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.115 16:47:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.373 /dev/nbd1 00:04:54.373 16:47:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.373 16:47:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:54.373 16:47:00 event.app_repeat -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.373 1+0 records in 00:04:54.373 1+0 records out 00:04:54.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018392 s, 22.3 MB/s 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:54.373 16:47:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:54.373 16:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.373 16:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.373 16:47:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.373 16:47:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.373 16:47:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.631 16:47:01 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.631 { 00:04:54.631 "nbd_device": "/dev/nbd0", 00:04:54.631 "bdev_name": "Malloc0" 00:04:54.631 }, 00:04:54.631 { 00:04:54.631 "nbd_device": "/dev/nbd1", 00:04:54.631 "bdev_name": "Malloc1" 00:04:54.631 } 00:04:54.631 ]' 00:04:54.631 16:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.631 { 00:04:54.631 "nbd_device": "/dev/nbd0", 00:04:54.631 "bdev_name": "Malloc0" 00:04:54.631 }, 00:04:54.631 { 00:04:54.631 "nbd_device": "/dev/nbd1", 00:04:54.631 "bdev_name": "Malloc1" 00:04:54.631 } 00:04:54.631 ]' 00:04:54.631 16:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.631 16:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.631 /dev/nbd1' 00:04:54.631 16:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.632 /dev/nbd1' 00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.632 16:47:01 
event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:54.632 256+0 records in
00:04:54.632 256+0 records out
00:04:54.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103219 s, 102 MB/s
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:54.632 256+0 records in
00:04:54.632 256+0 records out
00:04:54.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013714 s, 76.5 MB/s
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:54.632 256+0 records in
00:04:54.632 256+0 records out
00:04:54.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148456 s, 70.6 MB/s
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:54.632 16:47:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:54.890 16:47:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:54.890 16:47:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:54.890 16:47:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:54.890 16:47:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:54.890 16:47:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:54.890 16:47:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:54.890 16:47:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:54.890 16:47:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:54.890 16:47:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:54.890 16:47:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:55.148 16:47:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:55.148 16:47:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:55.148 16:47:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:55.148 16:47:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:55.148 16:47:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:55.148 16:47:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:55.148 16:47:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:55.148 16:47:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:55.148 16:47:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:55.148 16:47:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:55.148 16:47:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:55.406 16:47:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:55.406 16:47:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:55.664 16:47:02 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:55.664 [2024-07-15 16:47:02.266775] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:55.664 [2024-07-15 16:47:02.334101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:55.664 [2024-07-15 16:47:02.334104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.922 [2024-07-15 16:47:02.375891] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:55.922 [2024-07-15 16:47:02.375933] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:58.450 16:47:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:58.450 16:47:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:04:58.450 spdk_app_start Round 2
00:04:58.450 16:47:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4098498 /var/tmp/spdk-nbd.sock
00:04:58.450 16:47:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4098498 ']'
00:04:58.450 16:47:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:58.450 16:47:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:58.450 16:47:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:58.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:58.450 16:47:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:58.450 16:47:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:58.707 16:47:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:58.707 16:47:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:04:58.707 16:47:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:58.966 Malloc0
00:04:58.966 16:47:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:58.966 Malloc1
00:04:58.966 16:47:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:58.966 16:47:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:58.966 16:47:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:58.966 16:47:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:58.966 16:47:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:58.966 16:47:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:58.966 16:47:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:58.966 16:47:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:59.224 /dev/nbd0
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:59.224 1+0 records in
00:04:59.224 1+0 records out
00:04:59.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222479 s, 18.4 MB/s
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:04:59.224 16:47:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:59.224 16:47:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:59.481 /dev/nbd1
00:04:59.481 16:47:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:59.481 16:47:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:59.481 1+0 records in
00:04:59.481 1+0 records out
00:04:59.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201363 s, 20.3 MB/s
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:04:59.481 16:47:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:04:59.481 16:47:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:59.481 16:47:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:59.481 16:47:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:59.481 16:47:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:59.481 16:47:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:59.739 {
00:04:59.739 "nbd_device": "/dev/nbd0",
00:04:59.739 "bdev_name": "Malloc0"
00:04:59.739 },
00:04:59.739 {
00:04:59.739 "nbd_device": "/dev/nbd1",
00:04:59.739 "bdev_name": "Malloc1"
00:04:59.739 }
00:04:59.739 ]'
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:59.739 {
00:04:59.739 "nbd_device": "/dev/nbd0",
00:04:59.739 "bdev_name": "Malloc0"
00:04:59.739 },
00:04:59.739 {
00:04:59.739 "nbd_device": "/dev/nbd1",
00:04:59.739 "bdev_name": "Malloc1"
00:04:59.739 }
00:04:59.739 ]'
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:59.739 /dev/nbd1'
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:59.739 /dev/nbd1'
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:59.739 256+0 records in
00:04:59.739 256+0 records out
00:04:59.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010331 s, 101 MB/s
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:59.739 256+0 records in
00:04:59.739 256+0 records out
00:04:59.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014068 s, 74.5 MB/s
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:59.739 256+0 records in
00:04:59.739 256+0 records out
00:04:59.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146169 s, 71.7 MB/s
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:59.739 16:47:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:59.997 16:47:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:59.997 16:47:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:59.997 16:47:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:59.997 16:47:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:59.997 16:47:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:59.997 16:47:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:59.997 16:47:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:59.997 16:47:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:59.997 16:47:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:59.997 16:47:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:00.254 16:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:00.512 16:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:00.512 16:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:00.512 16:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:00.512 16:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:00.512 16:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:00.512 16:47:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:00.512 16:47:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:00.512 16:47:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:00.512 16:47:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:00.513 16:47:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:00.513 16:47:07 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:00.771 [2024-07-15 16:47:07.317180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:00.771 [2024-07-15 16:47:07.383306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:00.771 [2024-07-15 16:47:07.383309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.771 [2024-07-15 16:47:07.424640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:00.771 [2024-07-15 16:47:07.424679] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:04.083 16:47:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4098498 /var/tmp/spdk-nbd.sock
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4098498 ']'
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:04.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:04.083 16:47:10 event.app_repeat -- event/event.sh@39 -- # killprocess 4098498
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 4098498 ']'
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 4098498
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@953 -- # uname
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4098498
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4098498'
00:05:04.083 killing process with pid 4098498
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@967 -- # kill 4098498
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@972 -- # wait 4098498
00:05:04.083 spdk_app_start is called in Round 0.
00:05:04.083 Shutdown signal received, stop current app iteration
00:05:04.083 Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 reinitialization...
00:05:04.083 spdk_app_start is called in Round 1.
00:05:04.083 Shutdown signal received, stop current app iteration
00:05:04.083 Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 reinitialization...
00:05:04.083 spdk_app_start is called in Round 2.
00:05:04.083 Shutdown signal received, stop current app iteration
00:05:04.083 Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 reinitialization...
00:05:04.083 spdk_app_start is called in Round 3.
00:05:04.083 Shutdown signal received, stop current app iteration
00:05:04.083 16:47:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:04.083 16:47:10 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:04.083 
00:05:04.083 real 0m16.187s
00:05:04.083 user 0m35.099s
00:05:04.083 sys 0m2.284s
00:05:04.083 16:47:10 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:04.084 16:47:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:04.084 ************************************
00:05:04.084 END TEST app_repeat
00:05:04.084 ************************************
00:05:04.084 16:47:10 event -- common/autotest_common.sh@1142 -- # return 0
00:05:04.084 16:47:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:04.084 16:47:10 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:04.084 16:47:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:04.084 16:47:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.084 16:47:10 event -- common/autotest_common.sh@10 -- # set +x
00:05:04.084 ************************************
00:05:04.084 START TEST cpu_locks
00:05:04.084 ************************************
00:05:04.084 16:47:10 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:04.084 * Looking for test storage...
00:05:04.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:04.084 16:47:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:04.084 16:47:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:04.084 16:47:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:04.084 16:47:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:04.084 16:47:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:04.084 16:47:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.084 16:47:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:04.084 ************************************
00:05:04.084 START TEST default_locks
00:05:04.084 ************************************
00:05:04.084 16:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks
00:05:04.084 16:47:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4101445
00:05:04.084 16:47:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4101445
00:05:04.084 16:47:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:04.084 16:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4101445 ']'
00:05:04.084 16:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:04.084 16:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:04.084 16:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:04.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:04.084 16:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:04.084 16:47:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:04.340 [2024-07-15 16:47:10.751005] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... [2024-07-15 16:47:10.751051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4101445 ]
00:05:04.342 EAL: No free 2048 kB hugepages reported on node 1
00:05:04.342 [2024-07-15 16:47:10.805713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.342 [2024-07-15 16:47:10.885291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.909 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:04.909 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:05:04.909 16:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4101445
00:05:04.909 16:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4101445
00:05:04.909 16:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:05.476 lslocks: write error
00:05:05.476 16:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4101445
00:05:05.476 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 4101445 ']'
00:05:05.476 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 4101445
00:05:05.476 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:05:05.476 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:05.476 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4101445
00:05:05.476 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:05.476 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:05.476 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4101445'
00:05:05.476 killing process with pid 4101445
00:05:05.476 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 4101445
00:05:05.477 16:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 4101445
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4101445
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4101445
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 4101445
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4101445 ']'
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:05.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4101445) - No such process
00:05:05.736 ERROR: process (pid: 4101445) is no longer running
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:05.736 
00:05:05.736 real 0m1.588s
00:05:05.736 user 0m1.670s
00:05:05.736 sys 0m0.536s
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:05.736 16:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.736 ************************************
00:05:05.736 END TEST default_locks
00:05:05.736 ************************************
00:05:05.736 16:47:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:05.736 16:47:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:05.736 16:47:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:05.736 16:47:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:05.736 16:47:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.737 ************************************
00:05:05.737 START TEST default_locks_via_rpc
00:05:05.737 ************************************
00:05:05.737 16:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:05:05.737 16:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4101882
00:05:05.737 16:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4101882
00:05:05.737 16:47:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:05.737 16:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4101882 ']'
00:05:05.737 16:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.737 16:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:05.737 16:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:05.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.737 16:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:05.737 16:47:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:05.994 [2024-07-15 16:47:12.402193] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... [2024-07-15 16:47:12.402239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4101882 ]
00:05:06.003 EAL: No free 2048 kB hugepages reported on node 1
00:05:06.003 [2024-07-15 16:47:12.454760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.003 [2024-07-15 16:47:12.524145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4101882 00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4101882 00:05:06.591 16:47:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.851 16:47:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4101882 00:05:06.851 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 4101882 ']' 00:05:06.851 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 4101882 00:05:06.851 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:06.851 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.851 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4101882 00:05:06.851 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.851 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.851 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4101882' 00:05:06.851 killing process with pid 4101882 00:05:06.851 16:47:13 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@967 -- # kill 4101882 00:05:06.851 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 4101882 00:05:07.115 00:05:07.115 real 0m1.341s 00:05:07.115 user 0m1.404s 00:05:07.115 sys 0m0.410s 00:05:07.115 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.115 16:47:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.115 ************************************ 00:05:07.115 END TEST default_locks_via_rpc 00:05:07.115 ************************************ 00:05:07.115 16:47:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:07.115 16:47:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:07.115 16:47:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.115 16:47:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.115 16:47:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.115 ************************************ 00:05:07.115 START TEST non_locking_app_on_locked_coremask 00:05:07.115 ************************************ 00:05:07.115 16:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:07.115 16:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4102146 00:05:07.115 16:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4102146 /var/tmp/spdk.sock 00:05:07.115 16:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4102146 ']' 00:05:07.115 16:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.115 16:47:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.115 16:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.115 16:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.115 16:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.115 16:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.373 [2024-07-15 16:47:13.805317] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:07.373 [2024-07-15 16:47:13.805361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102146 ] 00:05:07.373 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.373 [2024-07-15 16:47:13.857604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.373 [2024-07-15 16:47:13.937373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.940 16:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.940 16:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:07.940 16:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4102161 00:05:07.940 16:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- 
# waitforlisten 4102161 /var/tmp/spdk2.sock 00:05:07.940 16:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4102161 ']' 00:05:07.940 16:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.940 16:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.940 16:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.940 16:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.940 16:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.940 16:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:08.199 [2024-07-15 16:47:14.637596] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:08.199 [2024-07-15 16:47:14.637643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102161 ] 00:05:08.199 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.199 [2024-07-15 16:47:14.705956] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:08.199 [2024-07-15 16:47:14.705980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.199 [2024-07-15 16:47:14.858019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.135 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.135 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:09.135 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4102146 00:05:09.135 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4102146 00:05:09.135 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.135 lslocks: write error 00:05:09.135 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4102146 00:05:09.135 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4102146 ']' 00:05:09.135 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4102146 00:05:09.135 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:09.135 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.135 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102146 00:05:09.393 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.393 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.393 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 4102146' 00:05:09.393 killing process with pid 4102146 00:05:09.393 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4102146 00:05:09.394 16:47:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4102146 00:05:09.961 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4102161 00:05:09.961 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4102161 ']' 00:05:09.961 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4102161 00:05:09.961 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:09.961 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.961 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102161 00:05:09.962 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.962 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.962 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4102161' 00:05:09.962 killing process with pid 4102161 00:05:09.962 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4102161 00:05:09.962 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4102161 00:05:10.221 00:05:10.221 real 0m3.053s 00:05:10.221 user 0m3.294s 00:05:10.221 sys 0m0.836s 00:05:10.221 16:47:16 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.221 16:47:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.221 ************************************ 00:05:10.221 END TEST non_locking_app_on_locked_coremask 00:05:10.221 ************************************ 00:05:10.221 16:47:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:10.221 16:47:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:10.221 16:47:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.221 16:47:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.221 16:47:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.221 ************************************ 00:05:10.221 START TEST locking_app_on_unlocked_coremask 00:05:10.221 ************************************ 00:05:10.221 16:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:10.221 16:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4102652 00:05:10.221 16:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4102652 /var/tmp/spdk.sock 00:05:10.221 16:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4102652 ']' 00:05:10.221 16:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.221 16:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.221 16:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.221 16:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.221 16:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.221 16:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:10.480 [2024-07-15 16:47:16.917893] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:10.481 [2024-07-15 16:47:16.917935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102652 ] 00:05:10.481 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.481 [2024-07-15 16:47:16.970980] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:10.481 [2024-07-15 16:47:16.971006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.481 [2024-07-15 16:47:17.050526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.048 16:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.048 16:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:11.048 16:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4102782 00:05:11.048 16:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4102782 /var/tmp/spdk2.sock 00:05:11.048 16:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4102782 ']' 00:05:11.048 16:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.048 16:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.048 16:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.048 16:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.048 16:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.048 16:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:11.307 [2024-07-15 16:47:17.759059] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:05:11.307 [2024-07-15 16:47:17.759110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102782 ] 00:05:11.307 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.307 [2024-07-15 16:47:17.836688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.566 [2024-07-15 16:47:17.994640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.133 16:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.133 16:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:12.133 16:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4102782 00:05:12.133 16:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4102782 00:05:12.133 16:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.391 lslocks: write error 00:05:12.391 16:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4102652 00:05:12.391 16:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4102652 ']' 00:05:12.391 16:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4102652 00:05:12.391 16:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:12.391 16:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.391 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102652 00:05:12.391 16:47:19 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.391 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.391 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4102652' 00:05:12.391 killing process with pid 4102652 00:05:12.391 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 4102652 00:05:12.391 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4102652 00:05:13.326 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4102782 00:05:13.326 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4102782 ']' 00:05:13.326 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4102782 00:05:13.326 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:13.326 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.326 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102782 00:05:13.326 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.326 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.326 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4102782' 00:05:13.326 killing process with pid 4102782 00:05:13.326 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 4102782 00:05:13.326 16:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4102782 00:05:13.585 00:05:13.585 real 0m3.145s 00:05:13.585 user 0m3.371s 00:05:13.585 sys 0m0.885s 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.585 ************************************ 00:05:13.585 END TEST locking_app_on_unlocked_coremask 00:05:13.585 ************************************ 00:05:13.585 16:47:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:13.585 16:47:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:13.585 16:47:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.585 16:47:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.585 16:47:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.585 ************************************ 00:05:13.585 START TEST locking_app_on_locked_coremask 00:05:13.585 ************************************ 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4103155 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4103155 /var/tmp/spdk.sock 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 
4103155 ']' 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.585 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.585 [2024-07-15 16:47:20.128881] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:13.585 [2024-07-15 16:47:20.128920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103155 ] 00:05:13.585 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.585 [2024-07-15 16:47:20.180456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.585 [2024-07-15 16:47:20.253169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:14.520 16:47:20 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4103384 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4103384 /var/tmp/spdk2.sock 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4103384 /var/tmp/spdk2.sock 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4103384 /var/tmp/spdk2.sock 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4103384 ']' 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.520 16:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.520 [2024-07-15 16:47:20.968376] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:14.520 [2024-07-15 16:47:20.968429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103384 ] 00:05:14.520 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.520 [2024-07-15 16:47:21.045171] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4103155 has claimed it. 00:05:14.520 [2024-07-15 16:47:21.045208] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:15.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4103384) - No such process 00:05:15.087 ERROR: process (pid: 4103384) is no longer running 00:05:15.087 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.087 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:15.087 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:15.087 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:15.087 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:15.087 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:15.087 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 4103155 00:05:15.087 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4103155 00:05:15.087 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.345 lslocks: write error 00:05:15.345 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4103155 00:05:15.345 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4103155 ']' 00:05:15.345 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4103155 00:05:15.345 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:15.345 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.345 16:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4103155 00:05:15.604 16:47:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.604 16:47:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.604 16:47:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4103155' 00:05:15.604 killing process with pid 4103155 00:05:15.604 16:47:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4103155 00:05:15.604 16:47:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4103155 00:05:15.862 00:05:15.862 real 0m2.261s 00:05:15.862 user 0m2.492s 00:05:15.862 sys 0m0.597s 00:05:15.862 16:47:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.862 
16:47:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.862 ************************************ 00:05:15.862 END TEST locking_app_on_locked_coremask 00:05:15.862 ************************************ 00:05:15.863 16:47:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:15.863 16:47:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:15.863 16:47:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.863 16:47:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.863 16:47:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.863 ************************************ 00:05:15.863 START TEST locking_overlapped_coremask 00:05:15.863 ************************************ 00:05:15.863 16:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:15.863 16:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4103643 00:05:15.863 16:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4103643 /var/tmp/spdk.sock 00:05:15.863 16:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:15.863 16:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4103643 ']' 00:05:15.863 16:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.863 16:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.863 16:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.863 16:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.863 16:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.863 [2024-07-15 16:47:22.453378] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:15.863 [2024-07-15 16:47:22.453419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103643 ] 00:05:15.863 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.863 [2024-07-15 16:47:22.506252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.121 [2024-07-15 16:47:22.576460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.121 [2024-07-15 16:47:22.576559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.121 [2024-07-15 16:47:22.576560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4103838 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4103838 /var/tmp/spdk2.sock 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:16.687 16:47:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4103838 /var/tmp/spdk2.sock 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4103838 /var/tmp/spdk2.sock 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4103838 ']' 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.687 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.687 [2024-07-15 16:47:23.299493] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:05:16.687 [2024-07-15 16:47:23.299543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103838 ] 00:05:16.687 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.011 [2024-07-15 16:47:23.376404] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4103643 has claimed it. 00:05:17.012 [2024-07-15 16:47:23.376442] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:17.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4103838) - No such process 00:05:17.277 ERROR: process (pid: 4103838) is no longer running 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:17.277 16:47:23 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4103643 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 4103643 ']' 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 4103643 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.277 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4103643 00:05:17.536 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.536 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.536 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4103643' 00:05:17.536 killing process with pid 4103643 00:05:17.536 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 4103643 00:05:17.536 16:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 4103643 00:05:17.794 00:05:17.794 real 0m1.880s 00:05:17.794 user 0m5.345s 00:05:17.794 sys 0m0.393s 00:05:17.794 16:47:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.794 16:47:24 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.794 ************************************ 00:05:17.794 END TEST locking_overlapped_coremask 00:05:17.794 ************************************ 00:05:17.794 16:47:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:17.794 16:47:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:17.794 16:47:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.794 16:47:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.794 16:47:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.794 ************************************ 00:05:17.794 START TEST locking_overlapped_coremask_via_rpc 00:05:17.794 ************************************ 00:05:17.794 16:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:17.794 16:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4103936 00:05:17.794 16:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4103936 /var/tmp/spdk.sock 00:05:17.794 16:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:17.794 16:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4103936 ']' 00:05:17.794 16:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.794 16:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.794 16:47:24 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.794 16:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.794 16:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.794 [2024-07-15 16:47:24.400301] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:17.794 [2024-07-15 16:47:24.400341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103936 ] 00:05:17.794 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.794 [2024-07-15 16:47:24.454499] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:17.794 [2024-07-15 16:47:24.454523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:18.052 [2024-07-15 16:47:24.530087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.052 [2024-07-15 16:47:24.530183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.052 [2024-07-15 16:47:24.530184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.618 16:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.618 16:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:18.618 16:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:18.618 16:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4104148 00:05:18.618 16:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4104148 /var/tmp/spdk2.sock 00:05:18.618 16:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4104148 ']' 00:05:18.618 16:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.618 16:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.618 16:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:18.619 16:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.619 16:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.619 [2024-07-15 16:47:25.240008] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:18.619 [2024-07-15 16:47:25.240055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104148 ] 00:05:18.619 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.877 [2024-07-15 16:47:25.319694] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:18.877 [2024-07-15 16:47:25.319724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:18.877 [2024-07-15 16:47:25.466643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.877 [2024-07-15 16:47:25.470263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.877 [2024-07-15 16:47:25.470263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.444 [2024-07-15 16:47:26.061295] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4103936 has claimed it. 
00:05:19.444 request: 00:05:19.444 { 00:05:19.444 "method": "framework_enable_cpumask_locks", 00:05:19.444 "req_id": 1 00:05:19.444 } 00:05:19.444 Got JSON-RPC error response 00:05:19.444 response: 00:05:19.444 { 00:05:19.444 "code": -32603, 00:05:19.444 "message": "Failed to claim CPU core: 2" 00:05:19.444 } 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4103936 /var/tmp/spdk.sock 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4103936 ']' 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.444 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:19.445 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.445 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.703 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.703 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.703 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4104148 /var/tmp/spdk2.sock 00:05:19.703 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4104148 ']' 00:05:19.703 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.703 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.703 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:19.703 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.703 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.962 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.962 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.962 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:19.962 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:19.962 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:19.962 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:19.962 00:05:19.962 real 0m2.080s 00:05:19.962 user 0m0.861s 00:05:19.962 sys 0m0.157s 00:05:19.962 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.962 16:47:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.962 ************************************ 00:05:19.962 END TEST locking_overlapped_coremask_via_rpc 00:05:19.962 ************************************ 00:05:19.962 16:47:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:19.962 16:47:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:19.962 16:47:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
4103936 ]] 00:05:19.962 16:47:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4103936 00:05:19.962 16:47:26 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4103936 ']' 00:05:19.962 16:47:26 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4103936 00:05:19.962 16:47:26 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:19.962 16:47:26 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.962 16:47:26 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4103936 00:05:19.962 16:47:26 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.962 16:47:26 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.962 16:47:26 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4103936' 00:05:19.962 killing process with pid 4103936 00:05:19.962 16:47:26 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4103936 00:05:19.962 16:47:26 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4103936 00:05:20.221 16:47:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4104148 ]] 00:05:20.221 16:47:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4104148 00:05:20.221 16:47:26 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4104148 ']' 00:05:20.221 16:47:26 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4104148 00:05:20.221 16:47:26 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:20.221 16:47:26 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.221 16:47:26 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4104148 00:05:20.221 16:47:26 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:20.221 16:47:26 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:20.221 16:47:26 
event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4104148' 00:05:20.221 killing process with pid 4104148 00:05:20.221 16:47:26 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4104148 00:05:20.221 16:47:26 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4104148 00:05:20.789 16:47:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:20.789 16:47:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:20.789 16:47:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4103936 ]] 00:05:20.789 16:47:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4103936 00:05:20.789 16:47:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4103936 ']' 00:05:20.789 16:47:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4103936 00:05:20.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4103936) - No such process 00:05:20.789 16:47:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4103936 is not found' 00:05:20.789 Process with pid 4103936 is not found 00:05:20.789 16:47:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4104148 ]] 00:05:20.789 16:47:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4104148 00:05:20.789 16:47:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4104148 ']' 00:05:20.789 16:47:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4104148 00:05:20.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4104148) - No such process 00:05:20.789 16:47:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4104148 is not found' 00:05:20.789 Process with pid 4104148 is not found 00:05:20.789 16:47:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:20.789 00:05:20.789 real 0m16.612s 00:05:20.789 user 0m28.846s 00:05:20.789 sys 0m4.691s 00:05:20.789 16:47:27 
event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.789 16:47:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.789 ************************************ 00:05:20.789 END TEST cpu_locks 00:05:20.789 ************************************ 00:05:20.789 16:47:27 event -- common/autotest_common.sh@1142 -- # return 0 00:05:20.789 00:05:20.789 real 0m41.668s 00:05:20.789 user 1m19.851s 00:05:20.789 sys 0m7.876s 00:05:20.789 16:47:27 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.789 16:47:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.789 ************************************ 00:05:20.789 END TEST event 00:05:20.789 ************************************ 00:05:20.789 16:47:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:20.789 16:47:27 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:20.789 16:47:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.789 16:47:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.789 16:47:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.789 ************************************ 00:05:20.789 START TEST thread 00:05:20.789 ************************************ 00:05:20.789 16:47:27 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:20.789 * Looking for test storage... 
00:05:20.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:20.789 16:47:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:20.789 16:47:27 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:20.789 16:47:27 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.789 16:47:27 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.789 ************************************ 00:05:20.789 START TEST thread_poller_perf 00:05:20.789 ************************************ 00:05:20.789 16:47:27 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:20.789 [2024-07-15 16:47:27.437534] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:20.790 [2024-07-15 16:47:27.437602] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104701 ] 00:05:21.048 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.048 [2024-07-15 16:47:27.495681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.048 [2024-07-15 16:47:27.567403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.048 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:21.984 ======================================
00:05:21.984 busy:2306954434 (cyc)
00:05:21.984 total_run_count: 410000
00:05:21.984 tsc_hz: 2300000000 (cyc)
00:05:21.984 ======================================
00:05:21.984 poller_cost: 5626 (cyc), 2446 (nsec)
00:05:21.984
00:05:21.984 real 0m1.229s
00:05:21.984 user 0m1.148s
00:05:21.984 sys 0m0.077s
00:05:21.984 16:47:28 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:21.984 16:47:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:21.984 ************************************
00:05:21.984 END TEST thread_poller_perf
00:05:21.984 ************************************
00:05:22.244 16:47:28 thread -- common/autotest_common.sh@1142 -- # return 0
00:05:22.244 16:47:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:22.244 16:47:28 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:05:22.244 16:47:28 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:22.244 16:47:28 thread -- common/autotest_common.sh@10 -- # set +x
00:05:22.244 ************************************
00:05:22.244 START TEST thread_poller_perf
00:05:22.244 ************************************
00:05:22.244 16:47:28 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:22.244 [2024-07-15 16:47:28.731945] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:05:22.244 [2024-07-15 16:47:28.732018] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104920 ]
00:05:22.244 EAL: No free 2048 kB hugepages reported on node 1
00:05:22.244 [2024-07-15 16:47:28.789652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:22.244 [2024-07-15 16:47:28.861490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.244 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:23.618 ======================================
00:05:23.618 busy:2301391442 (cyc)
00:05:23.618 total_run_count: 5447000
00:05:23.618 tsc_hz: 2300000000 (cyc)
00:05:23.618 ======================================
00:05:23.618 poller_cost: 422 (cyc), 183 (nsec)
00:05:23.618
00:05:23.618 real 0m1.222s
00:05:23.618 user 0m1.148s
00:05:23.618 sys 0m0.070s
00:05:23.618 16:47:29 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:23.618 16:47:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:23.618 ************************************
00:05:23.618 END TEST thread_poller_perf
00:05:23.618 ************************************
00:05:23.618 16:47:29 thread -- common/autotest_common.sh@1142 -- # return 0
00:05:23.618 16:47:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:23.618
00:05:23.618 real 0m2.673s
00:05:23.618 user 0m2.381s
00:05:23.618 sys 0m0.300s
00:05:23.618 16:47:29 thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:23.618 16:47:29 thread -- common/autotest_common.sh@10 -- # set +x
00:05:23.618 ************************************
00:05:23.618 END TEST thread
00:05:23.618 ************************************
00:05:23.618 16:47:29 -- common/autotest_common.sh@1142 -- # return 0
00:05:23.618 16:47:29 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:05:23.618 16:47:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:23.618 16:47:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:23.618 16:47:29 -- common/autotest_common.sh@10 -- # set +x
00:05:23.618 ************************************
00:05:23.618 START TEST accel
00:05:23.618 ************************************
00:05:23.618 16:47:30 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:05:23.618 * Looking for test storage...
00:05:23.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:05:23.618 16:47:30 accel -- accel/accel.sh@81 -- # declare -A expected_opcs
00:05:23.618 16:47:30 accel -- accel/accel.sh@82 -- # get_expected_opcs
00:05:23.618 16:47:30 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:23.618 16:47:30 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4105229
00:05:23.618 16:47:30 accel -- accel/accel.sh@63 -- # waitforlisten 4105229
00:05:23.618 16:47:30 accel -- common/autotest_common.sh@829 -- # '[' -z 4105229 ']'
00:05:23.618 16:47:30 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:23.618 16:47:30 accel -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:23.618 16:47:30 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:23.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:23.619 16:47:30 accel -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:23.619 16:47:30 accel -- common/autotest_common.sh@10 -- # set +x
00:05:23.619 16:47:30 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:05:23.619 16:47:30 accel -- accel/accel.sh@61 -- # build_accel_config
00:05:23.619 16:47:30 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:23.619 16:47:30 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:23.619 16:47:30 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:23.619 16:47:30 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:23.619 16:47:30 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:23.619 16:47:30 accel -- accel/accel.sh@40 -- # local IFS=,
00:05:23.619 16:47:30 accel -- accel/accel.sh@41 -- # jq -r .
00:05:23.619 [2024-07-15 16:47:30.157431] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:05:23.619 [2024-07-15 16:47:30.157485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4105229 ]
00:05:23.619 EAL: No free 2048 kB hugepages reported on node 1
00:05:23.619 [2024-07-15 16:47:30.210566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.877 [2024-07-15 16:47:30.292023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.445 16:47:30 accel -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:24.445 16:47:30 accel -- common/autotest_common.sh@862 -- # return 0
00:05:24.445 16:47:30 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]]
00:05:24.445 16:47:30 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]]
00:05:24.446 16:47:30 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]]
00:05:24.446 16:47:30 accel -- accel/accel.sh@68 -- # [[ -n '' ]]
00:05:24.446 16:47:30 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:05:24.446 16:47:30 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments
00:05:24.446 16:47:30 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:05:24.446 16:47:30 accel -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:24.446 16:47:30 accel -- common/autotest_common.sh@10 -- # set +x
00:05:24.446 16:47:30 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # IFS==
00:05:24.446 16:47:30 accel -- accel/accel.sh@72 -- # read -r opc module
00:05:24.446 16:47:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:05:24.446 16:47:30 accel -- accel/accel.sh@75 -- # killprocess 4105229
00:05:24.446 16:47:30 accel -- common/autotest_common.sh@948 -- # '[' -z 4105229 ']'
00:05:24.446 16:47:30 accel -- common/autotest_common.sh@952 -- # kill -0 4105229
00:05:24.446 16:47:30 accel -- common/autotest_common.sh@953 -- # uname
00:05:24.446 16:47:30 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:24.446 16:47:30 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4105229
00:05:24.446 16:47:31 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:24.446 16:47:31 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:24.446 16:47:31 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4105229'
killing process with pid 4105229
00:05:24.446 16:47:31 accel -- common/autotest_common.sh@967 -- # kill 4105229
00:05:24.446 16:47:31 accel -- common/autotest_common.sh@972 -- # wait 4105229
00:05:24.704 16:47:31 accel -- accel/accel.sh@76 -- # trap - ERR
00:05:24.704 16:47:31 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h
00:05:24.704 16:47:31 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:05:24.704 16:47:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:24.704 16:47:31 accel -- common/autotest_common.sh@10 -- # set +x
00:05:24.704 16:47:31 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h
00:05:24.704 16:47:31 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:05:24.704 16:47:31 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config
00:05:24.704 16:47:31 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:24.704 16:47:31 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:24.704 16:47:31 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:24.704 16:47:31 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:24.704 16:47:31 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:24.704 16:47:31 accel.accel_help -- accel/accel.sh@40 -- # local IFS=,
00:05:24.704 16:47:31 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
00:05:24.963 16:47:31 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:24.963 16:47:31 accel.accel_help -- common/autotest_common.sh@10 -- # set +x
00:05:24.963 16:47:31 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:24.963 16:47:31 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:05:24.963 16:47:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:24.963 16:47:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:24.963 16:47:31 accel -- common/autotest_common.sh@10 -- # set +x
00:05:24.963 ************************************
00:05:24.963 START TEST accel_missing_filename
00:05:24.963 ************************************
00:05:24.963 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress
00:05:24.963 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0
00:05:24.963 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress
00:05:24.963 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:05:24.963 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:24.963 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf
00:05:24.963 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:24.963 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress
00:05:24.963 16:47:31 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:05:24.963 16:47:31 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config
00:05:24.963 16:47:31 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:24.963 16:47:31 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:24.963 16:47:31 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:24.964 16:47:31 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:24.964 16:47:31 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:24.964 16:47:31 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=,
00:05:24.964 16:47:31 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r .
00:05:24.964 [2024-07-15 16:47:31.495024] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:05:24.964 [2024-07-15 16:47:31.495100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4105486 ]
00:05:24.964 EAL: No free 2048 kB hugepages reported on node 1
00:05:24.964 [2024-07-15 16:47:31.551330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.964 [2024-07-15 16:47:31.622916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.222 [2024-07-15 16:47:31.663128] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:25.222 [2024-07-15 16:47:31.722632] accel_perf.c:1464:main: *ERROR*: ERROR starting application
00:05:25.222 A filename is required.
00:05:25.222 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234
00:05:25.222 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:25.222 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106
00:05:25.222 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in
00:05:25.222 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1
00:05:25.222 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:25.222
00:05:25.222 real 0m0.329s
00:05:25.222 user 0m0.260s
00:05:25.222 sys 0m0.108s
00:05:25.222 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:25.222 16:47:31 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x
00:05:25.222 ************************************
00:05:25.222 END TEST accel_missing_filename
00:05:25.222 ************************************
00:05:25.222 16:47:31 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:25.222 16:47:31 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:25.222 16:47:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:05:25.222 16:47:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:25.222 16:47:31 accel -- common/autotest_common.sh@10 -- # set +x
00:05:25.222 ************************************
00:05:25.222 START TEST accel_compress_verify
00:05:25.222 ************************************
00:05:25.222 16:47:31 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:25.222 16:47:31 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0
00:05:25.222 16:47:31 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:25.222 16:47:31 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:05:25.222 16:47:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:25.222 16:47:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf
00:05:25.222 16:47:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:25.222 16:47:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:25.222 16:47:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:25.222 16:47:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config
00:05:25.222 16:47:31 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:25.222 16:47:31 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:25.222 16:47:31 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:25.223 16:47:31 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:25.223 16:47:31 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:25.223 16:47:31 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=,
00:05:25.223 16:47:31 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r .
00:05:25.223 [2024-07-15 16:47:31.879908] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:05:25.223 [2024-07-15 16:47:31.879977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4105535 ]
00:05:25.481 EAL: No free 2048 kB hugepages reported on node 1
00:05:25.481 [2024-07-15 16:47:31.936195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:25.481 [2024-07-15 16:47:32.008110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.481 [2024-07-15 16:47:32.048935] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:25.481 [2024-07-15 16:47:32.108662] accel_perf.c:1464:main: *ERROR*: ERROR starting application
00:05:25.741
00:05:25.741 Compression does not support the verify option, aborting.
00:05:25.741 16:47:32 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161
00:05:25.741 16:47:32 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:25.741 16:47:32 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33
00:05:25.741 16:47:32 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in
00:05:25.741 16:47:32 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1
00:05:25.741 16:47:32 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:25.741
00:05:25.741 real 0m0.330s
00:05:25.741 user 0m0.256s
00:05:25.741 sys 0m0.116s
00:05:25.741 16:47:32 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:25.741 16:47:32 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x
00:05:25.741 ************************************
00:05:25.741 END TEST accel_compress_verify
00:05:25.741 ************************************
00:05:25.741 16:47:32 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:25.741 16:47:32 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:05:25.741 16:47:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:25.741 16:47:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:25.741 16:47:32 accel -- common/autotest_common.sh@10 -- # set +x
00:05:25.741 ************************************
00:05:25.741 START TEST accel_wrong_workload
00:05:25.741 ************************************
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar
00:05:25.741 16:47:32 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config
00:05:25.741 16:47:32 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:05:25.741 16:47:32 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:25.741 16:47:32 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:25.741 16:47:32 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:25.741 16:47:32 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:25.741 16:47:32 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:25.741 16:47:32 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=,
00:05:25.741 16:47:32 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r .
Unsupported workload type: foobar
00:05:25.741 [2024-07-15 16:47:32.263880] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:05:25.741 accel_perf options:
00:05:25.741 [-h help message]
00:05:25.741 [-q queue depth per core]
00:05:25.741 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:05:25.741 [-T number of threads per core
00:05:25.741 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:05:25.741 [-t time in seconds]
00:05:25.741 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:05:25.741 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:05:25.741 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:05:25.741 [-l for compress/decompress workloads, name of uncompressed input file
00:05:25.741 [-S for crc32c workload, use this seed value (default 0)
00:05:25.741 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:05:25.741 [-f for fill workload, use this BYTE value (default 255)
00:05:25.741 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:05:25.741 [-y verify result if this switch is on]
00:05:25.741 [-a tasks to allocate per core (default: same value as -q)]
00:05:25.741 Can be used to spread operations across a wider range of memory.
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:25.741
00:05:25.741 real 0m0.032s
00:05:25.741 user 0m0.047s
00:05:25.741 sys 0m0.011s
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:25.741 16:47:32 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x
00:05:25.741 ************************************
00:05:25.741 END TEST accel_wrong_workload
00:05:25.741 ************************************
00:05:25.741 16:47:32 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:25.741 16:47:32 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:05:25.741 16:47:32 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:05:25.741 16:47:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:25.741 16:47:32 accel -- common/autotest_common.sh@10 -- # set +x
00:05:25.741 ************************************
00:05:25.741 START TEST accel_negative_buffers
00:05:25.741 ************************************
00:05:25.741 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:05:25.741 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0
00:05:25.741 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:05:25.741 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:05:25.741 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:25.741 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf
00:05:25.741 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:25.741 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1
00:05:25.741 16:47:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:05:25.741 16:47:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config
00:05:25.741 16:47:32 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:25.741 16:47:32 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:25.741 16:47:32 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:25.741 16:47:32 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:25.741 16:47:32 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:25.741 16:47:32 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=,
00:05:25.741 16:47:32 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r .
-x option must be non-negative.
00:05:25.741 [2024-07-15 16:47:32.361538] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:05:25.741 accel_perf options:
00:05:25.741 [-h help message]
00:05:25.741 [-q queue depth per core]
00:05:25.741 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:05:25.741 [-T number of threads per core
00:05:25.741 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:05:25.741 [-t time in seconds]
00:05:25.742 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:05:25.742 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:05:25.742 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:05:25.742 [-l for compress/decompress workloads, name of uncompressed input file
00:05:25.742 [-S for crc32c workload, use this seed value (default 0)
00:05:25.742 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:05:25.742 [-f for fill workload, use this BYTE value (default 255)
00:05:25.742 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:05:25.742 [-y verify result if this switch is on]
00:05:25.742 [-a tasks to allocate per core (default: same value as -q)]
00:05:25.742 Can be used to spread operations across a wider range of memory.
00:05:25.742 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:25.742 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.742 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.742 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.742 00:05:25.742 real 0m0.034s 00:05:25.742 user 0m0.022s 00:05:25.742 sys 0m0.012s 00:05:25.742 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.742 16:47:32 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:25.742 ************************************ 00:05:25.742 END TEST accel_negative_buffers 00:05:25.742 ************************************ 00:05:25.742 Error: writing output failed: Broken pipe 00:05:25.742 16:47:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.742 16:47:32 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:25.742 16:47:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:25.742 16:47:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.742 16:47:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.002 ************************************ 00:05:26.002 START TEST accel_crc32c 00:05:26.002 ************************************ 00:05:26.002 16:47:32 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 
00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:26.002 [2024-07-15 16:47:32.461550] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:26.002 [2024-07-15 16:47:32.461604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4105602 ] 00:05:26.002 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.002 [2024-07-15 16:47:32.519915] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.002 [2024-07-15 16:47:32.604011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 
16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.002 16:47:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.377 16:47:33 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:27.377 16:47:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.377 00:05:27.377 real 0m1.349s 00:05:27.377 user 0m1.244s 00:05:27.377 sys 0m0.119s 00:05:27.377 16:47:33 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.377 16:47:33 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:27.377 ************************************ 00:05:27.377 END TEST accel_crc32c 00:05:27.377 ************************************ 00:05:27.377 16:47:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.377 16:47:33 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:27.377 16:47:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:27.377 16:47:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.377 16:47:33 accel -- common/autotest_common.sh@10 -- # set +x 
00:05:27.377 ************************************ 00:05:27.377 START TEST accel_crc32c_C2 00:05:27.377 ************************************ 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:27.377 16:47:33 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:27.377 [2024-07-15 16:47:33.871261] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:05:27.377 [2024-07-15 16:47:33.871331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4105848 ] 00:05:27.377 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.377 [2024-07-15 16:47:33.929438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.377 [2024-07-15 16:47:34.005141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.377 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.636 16:47:34 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.636 16:47:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.573 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.574 
16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.574 00:05:28.574 real 0m1.342s 00:05:28.574 user 0m1.235s 00:05:28.574 sys 0m0.119s 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.574 16:47:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:28.574 ************************************ 00:05:28.574 END TEST accel_crc32c_C2 00:05:28.574 ************************************ 00:05:28.574 16:47:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.574 16:47:35 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:28.574 16:47:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:28.574 16:47:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.574 16:47:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.574 ************************************ 00:05:28.574 START TEST accel_copy 00:05:28.574 ************************************ 00:05:28.574 16:47:35 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:28.574 16:47:35 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 
00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:28.833 [2024-07-15 16:47:35.268373] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:28.833 [2024-07-15 16:47:35.268442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4106109 ] 00:05:28.833 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.833 [2024-07-15 16:47:35.323163] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.833 [2024-07-15 16:47:35.395075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.833 16:47:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:30.210 16:47:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.210 00:05:30.210 real 0m1.331s 00:05:30.210 user 0m1.224s 00:05:30.210 sys 0m0.120s 00:05:30.210 16:47:36 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.210 16:47:36 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:30.210 ************************************ 00:05:30.210 END TEST accel_copy 00:05:30.210 ************************************ 00:05:30.210 16:47:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.210 16:47:36 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:30.210 16:47:36 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:30.210 16:47:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.210 16:47:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.210 ************************************ 00:05:30.210 START TEST accel_fill 00:05:30.210 ************************************ 00:05:30.210 16:47:36 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:30.210 [2024-07-15 16:47:36.665720] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:05:30.210 [2024-07-15 16:47:36.665770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4106361 ] 00:05:30.210 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.210 [2024-07-15 16:47:36.719911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.210 [2024-07-15 16:47:36.792199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.211 16:47:36 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.211 16:47:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@20 
-- # val= 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.584 16:47:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:31.585 16:47:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:31.585 00:05:31.585 real 0m1.331s 00:05:31.585 user 0m1.232s 00:05:31.585 sys 0m0.111s 00:05:31.585 16:47:37 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.585 16:47:37 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:31.585 ************************************ 00:05:31.585 END TEST accel_fill 00:05:31.585 ************************************ 00:05:31.585 16:47:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.585 16:47:38 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:31.585 16:47:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:31.585 16:47:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.585 16:47:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.585 ************************************ 00:05:31.585 START TEST accel_copy_crc32c 00:05:31.585 ************************************ 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@32 -- 
# [[ 0 -gt 0 ]] 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:31.585 [2024-07-15 16:47:38.063854] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:31.585 [2024-07-15 16:47:38.063914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4106611 ] 00:05:31.585 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.585 [2024-07-15 16:47:38.120932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.585 [2024-07-15 16:47:38.193819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 
bytes' 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.585 16:47:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.972 00:05:32.972 real 0m1.337s 00:05:32.972 user 0m1.230s 00:05:32.972 sys 0m0.121s 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.972 16:47:39 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:32.972 ************************************ 00:05:32.972 END TEST accel_copy_crc32c 
00:05:32.972 ************************************ 00:05:32.972 16:47:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.972 16:47:39 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:32.972 16:47:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:32.972 16:47:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.972 16:47:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.972 ************************************ 00:05:32.972 START TEST accel_copy_crc32c_C2 00:05:32.972 ************************************ 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.972 16:47:39 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:32.972 [2024-07-15 16:47:39.451876] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:32.972 [2024-07-15 16:47:39.451912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4106872 ] 00:05:32.972 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.972 [2024-07-15 16:47:39.505247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.972 [2024-07-15 16:47:39.577642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.972 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.973 16:47:39 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.973 
16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.973 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.281 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.281 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.281 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.281 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.281 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.281 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.281 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.281 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.281 16:47:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.226 16:47:40 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.226 00:05:34.226 real 0m1.319s 00:05:34.226 user 0m1.227s 00:05:34.226 sys 0m0.106s 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.226 16:47:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:34.226 ************************************ 00:05:34.226 
END TEST accel_copy_crc32c_C2 00:05:34.226 ************************************ 00:05:34.226 16:47:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.226 16:47:40 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:34.226 16:47:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.226 16:47:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.226 16:47:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.226 ************************************ 00:05:34.226 START TEST accel_dualcast 00:05:34.226 ************************************ 00:05:34.226 16:47:40 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.226 16:47:40 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:34.226 16:47:40 accel.accel_dualcast -- 
accel/accel.sh@41 -- # jq -r . 00:05:34.226 [2024-07-15 16:47:40.838042] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:34.226 [2024-07-15 16:47:40.838078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107118 ] 00:05:34.226 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.226 [2024-07-15 16:47:40.890741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.484 [2024-07-15 16:47:40.963244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 
00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.484 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.485 16:47:41 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.485 16:47:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.485 16:47:41 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:35.860 16:47:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.860 00:05:35.860 real 0m1.320s 00:05:35.860 user 0m1.225s 00:05:35.860 sys 0m0.109s 00:05:35.860 16:47:42 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.860 16:47:42 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:35.860 ************************************ 00:05:35.860 END TEST accel_dualcast 00:05:35.860 ************************************ 00:05:35.860 16:47:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.860 16:47:42 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:35.860 16:47:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:35.860 16:47:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.860 16:47:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.860 ************************************ 00:05:35.860 START TEST accel_compare 00:05:35.860 ************************************ 00:05:35.860 16:47:42 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:35.860 [2024-07-15 16:47:42.219562] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:35.860 [2024-07-15 16:47:42.219611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107372 ] 00:05:35.860 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.860 [2024-07-15 16:47:42.268169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.860 [2024-07-15 16:47:42.341244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:35.860 
16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:35.860 
16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.860 
16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.860 16:47:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.238 16:47:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:37.239 16:47:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.239 00:05:37.239 real 0m1.317s 00:05:37.239 user 0m1.225s 00:05:37.239 sys 0m0.104s 00:05:37.239 16:47:43 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.239 16:47:43 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:37.239 ************************************ 00:05:37.239 END TEST accel_compare 00:05:37.239 ************************************ 00:05:37.239 16:47:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.239 16:47:43 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:37.239 16:47:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:37.239 16:47:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.239 16:47:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.239 ************************************ 00:05:37.239 START TEST accel_xor 00:05:37.239 ************************************ 00:05:37.239 16:47:43 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:37.239 [2024-07-15 16:47:43.614564] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:05:37.239 [2024-07-15 16:47:43.614635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107622 ] 00:05:37.239 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.239 [2024-07-15 16:47:43.669785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.239 [2024-07-15 16:47:43.742615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 
16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.239 16:47:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.619 00:05:38.619 real 0m1.336s 00:05:38.619 user 0m1.238s 00:05:38.619 sys 
0m0.111s 00:05:38.619 16:47:44 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.619 16:47:44 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:38.619 ************************************ 00:05:38.619 END TEST accel_xor 00:05:38.619 ************************************ 00:05:38.619 16:47:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.619 16:47:44 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:38.619 16:47:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:38.619 16:47:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.619 16:47:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.619 ************************************ 00:05:38.619 START TEST accel_xor 00:05:38.619 ************************************ 00:05:38.619 16:47:44 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.619 16:47:44 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:38.619 16:47:44 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:38.619 [2024-07-15 16:47:45.007949] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:38.619 [2024-07-15 16:47:45.008014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107881 ] 00:05:38.619 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.619 [2024-07-15 16:47:45.064427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.619 [2024-07-15 16:47:45.137016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:38.619 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.620 16:47:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.998 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.999 16:47:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.999 16:47:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:39.999 16:47:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.999 00:05:39.999 real 0m1.337s 00:05:39.999 user 0m1.241s 00:05:39.999 sys 0m0.110s 00:05:39.999 16:47:46 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.999 16:47:46 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:39.999 ************************************ 00:05:39.999 END TEST accel_xor 00:05:39.999 ************************************ 00:05:39.999 16:47:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.999 16:47:46 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:39.999 16:47:46 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:39.999 16:47:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.999 16:47:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.999 ************************************ 00:05:39.999 START TEST accel_dif_verify 00:05:39.999 ************************************ 00:05:39.999 16:47:46 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:39.999 [2024-07-15 16:47:46.408487] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:39.999 [2024-07-15 16:47:46.408534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108153 ] 00:05:39.999 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.999 [2024-07-15 16:47:46.463273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.999 [2024-07-15 16:47:46.537411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify 
-- accel/accel.sh@20 -- # val=0x1 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify 
-- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.999 16:47:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:41.379 16:47:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:05:41.379 00:05:41.379 real 0m1.334s 00:05:41.379 user 0m1.236s 00:05:41.379 sys 0m0.113s 00:05:41.379 16:47:47 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.379 16:47:47 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:41.379 ************************************ 00:05:41.379 END TEST accel_dif_verify 00:05:41.379 ************************************ 00:05:41.379 16:47:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.379 16:47:47 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:41.379 16:47:47 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:41.379 16:47:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.379 16:47:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.379 ************************************ 00:05:41.379 START TEST accel_dif_generate 00:05:41.379 ************************************ 00:05:41.379 16:47:47 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.379 16:47:47 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:41.379 [2024-07-15 16:47:47.813164] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:41.379 [2024-07-15 16:47:47.813211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108439 ] 00:05:41.379 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.379 [2024-07-15 16:47:47.867084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.379 [2024-07-15 16:47:47.939765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.379 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.380 16:47:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:05:42.759 16:47:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.759 00:05:42.759 real 0m1.331s 00:05:42.759 user 0m1.238s 00:05:42.759 sys 0m0.108s 00:05:42.759 16:47:49 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.759 16:47:49 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:42.759 ************************************ 00:05:42.759 END TEST accel_dif_generate 00:05:42.759 ************************************ 00:05:42.759 16:47:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.759 16:47:49 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:42.759 16:47:49 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:42.759 16:47:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.759 16:47:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.759 ************************************ 00:05:42.759 START TEST accel_dif_generate_copy 00:05:42.759 ************************************ 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:42.759 16:47:49 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:42.759 [2024-07-15 16:47:49.212723] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:42.759 [2024-07-15 16:47:49.212783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108705 ] 00:05:42.759 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.759 [2024-07-15 16:47:49.269444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.759 [2024-07-15 16:47:49.341809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.759 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.760 16:47:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
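The repeated `IFS=: read -r var val` / `case "$var" in` entries above are accel.sh's xtrace as it parses the colon-separated configuration lines echoed by `accel_perf`. A minimal sketch of that parsing pattern follows; the function name and the sample input lines are illustrative, not taken from SPDK's actual scripts.

```shell
# Hypothetical demo of the "IFS=: read -r var val" loop visible in the
# xtrace above: split each input line on the first ':' into a key (var)
# and a value (val), then dispatch on the key with a case statement.
parse_accel_output() {
  while IFS=: read -r var val; do
    val="${val# }"                      # trim the space after the colon
    case "$var" in
      *"Executing for"*) echo "duration=${val}" ;;
      *"Throughput"*)    echo "throughput=${val}" ;;
      *) ;;                             # ignore lines we don't recognize
    esac
  done
}

# Feed the parser two hypothetical accel_perf-style lines.
printf '%s\n' 'Executing for: 1 seconds' 'Throughput: 12345' | parse_accel_output
```

Because `IFS=:` is set only for the `read` builtin, the rest of the loop body sees normal word splitting, which is why the surrounding script also restores `local IFS=,` elsewhere.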
00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.139 16:47:50 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.139 00:05:44.139 real 0m1.338s 00:05:44.139 user 0m1.244s 00:05:44.139 sys 0m0.109s 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.139 16:47:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:44.139 ************************************ 00:05:44.139 END TEST accel_dif_generate_copy 00:05:44.139 ************************************ 00:05:44.139 16:47:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.139 16:47:50 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:44.139 16:47:50 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:44.139 16:47:50 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:44.139 16:47:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.139 16:47:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.139 ************************************ 00:05:44.139 START TEST accel_comp 00:05:44.139 ************************************ 00:05:44.139 16:47:50 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:44.139 16:47:50 
accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:44.139 [2024-07-15 16:47:50.617662] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:44.139 [2024-07-15 16:47:50.617717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108966 ] 00:05:44.139 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.139 [2024-07-15 16:47:50.674295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.139 [2024-07-15 16:47:50.748046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.139 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
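Each test in this log is wrapped by `run_test` (from `common/autotest_common.sh`), which prints the starred `START TEST` / `END TEST` banners and the `real`/`user`/`sys` timing seen above. The sketch below only mimics the banner behavior under assumed names (`banner`, `my_run_test`); it is not SPDK's actual implementation, which also handles timing and xtrace control.

```shell
# Illustrative stand-in for the run_test wrapper pattern: print a START
# banner, run the wrapped command, print an END banner, and propagate the
# command's exit status.
banner() {
  printf '%s\n' '************************************' "$1" \
                '************************************'
}

my_run_test() {
  local name="$1"; shift
  banner "START TEST $name"
  "$@"                                  # run the wrapped test command
  local rc=$?
  banner "END TEST $name"
  return $rc
}

my_run_test demo_test true
```

Propagating the wrapped command's exit status is what lets the outer harness count a test as failed while still emitting the closing banner.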
00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.140 16:47:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.519 16:47:51 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:45.519 16:47:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.519 00:05:45.519 real 0m1.340s 00:05:45.519 user 0m1.242s 00:05:45.519 sys 0m0.111s 00:05:45.519 16:47:51 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.519 16:47:51 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:45.519 ************************************ 00:05:45.519 END TEST accel_comp 00:05:45.519 ************************************ 00:05:45.519 16:47:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.519 16:47:51 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:45.519 16:47:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:45.519 16:47:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.519 16:47:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.519 ************************************ 00:05:45.519 START TEST accel_decomp 00:05:45.519 ************************************ 00:05:45.519 16:47:51 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:45.519 16:47:51 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:05:45.519 [2024-07-15 16:47:52.004816] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:05:45.519 [2024-07-15 16:47:52.004864] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109218 ] 00:05:45.519 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.519 [2024-07-15 16:47:52.058751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.519 [2024-07-15 16:47:52.130984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.519 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.778 
16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.778 16:47:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:46.715 16:47:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.715 00:05:46.715 real 0m1.325s 00:05:46.715 user 0m1.221s 00:05:46.715 sys 0m0.119s 00:05:46.715 16:47:53 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.715 16:47:53 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:46.715 ************************************ 00:05:46.715 END TEST accel_decomp 00:05:46.715 ************************************ 00:05:46.715 16:47:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.715 16:47:53 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:46.715 16:47:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:46.715 16:47:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.715 16:47:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.715 ************************************ 00:05:46.715 START TEST accel_decomp_full 00:05:46.715 ************************************ 00:05:46.715 16:47:53 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:46.715 
16:47:53 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:46.715 16:47:53 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:46.715 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.715 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.715 16:47:53 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:46.715 16:47:53 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:46.715 16:47:53 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:46.715 16:47:53 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.715 16:47:53 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:46.975 [2024-07-15 16:47:53.404984] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:05:46.975 [2024-07-15 16:47:53.405034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109476 ] 00:05:46.975 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.975 [2024-07-15 16:47:53.458882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.975 [2024-07-15 16:47:53.531035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.975 16:47:53 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:46.975 16:47:53 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.975 16:47:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.353 16:47:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.353 16:47:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.353 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.353 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.353 16:47:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.353 16:47:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.353 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.353 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.353 16:47:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:48.354 16:47:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.354 00:05:48.354 real 0m1.346s 00:05:48.354 user 0m1.241s 00:05:48.354 sys 0m0.117s 00:05:48.354 16:47:54 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.354 16:47:54 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:48.354 ************************************ 00:05:48.354 END TEST accel_decomp_full 00:05:48.354 ************************************ 00:05:48.354 16:47:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.354 16:47:54 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:48.354 16:47:54 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:48.354 16:47:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.354 16:47:54 accel 
-- common/autotest_common.sh@10 -- # set +x 00:05:48.354 ************************************ 00:05:48.354 START TEST accel_decomp_mcore 00:05:48.354 ************************************ 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:48.354 [2024-07-15 16:47:54.818559] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:05:48.354 [2024-07-15 16:47:54.818607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109740 ] 00:05:48.354 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.354 [2024-07-15 16:47:54.873134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.354 [2024-07-15 16:47:54.948049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.354 [2024-07-15 16:47:54.948147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.354 [2024-07-15 16:47:54.948233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.354 [2024-07-15 16:47:54.948234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:48.354 16:47:54 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.354 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.355 16:47:55 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.355 16:47:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.733 16:47:56 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.733 
16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.733 00:05:49.733 real 0m1.349s 00:05:49.733 user 0m4.571s 00:05:49.733 sys 0m0.122s 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.733 16:47:56 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 ************************************ 00:05:49.733 END TEST accel_decomp_mcore 00:05:49.733 ************************************ 00:05:49.733 16:47:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.733 16:47:56 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:49.733 16:47:56 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:49.733 16:47:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.733 16:47:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 ************************************ 00:05:49.733 START TEST accel_decomp_full_mcore 00:05:49.733 ************************************ 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:49.733 [2024-07-15 16:47:56.222520] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:05:49.733 [2024-07-15 16:47:56.222567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110002 ] 00:05:49.733 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.733 [2024-07-15 16:47:56.277298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.733 [2024-07-15 16:47:56.351973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.733 [2024-07-15 16:47:56.352072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.733 [2024-07-15 16:47:56.352159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.733 [2024-07-15 16:47:56.352161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.733 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.734 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.734 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.734 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.734 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.734 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.734 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.734 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.734 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.734 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.734 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.734 16:47:56 
00:05:49.734 16:47:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # [repeated xtrace of option parsing condensed; values read: val=0xf, val=decompress, val='111250 bytes', val=software, val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes]
00:05:50.987 16:47:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:50.987 16:47:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:50.987 16:47:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:50.987 real 0m1.359s
00:05:50.987 user 0m4.617s
00:05:50.987 sys 0m0.122s
00:05:50.987 16:47:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:50.987 16:47:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:50.987 ************************************
00:05:50.987 END TEST accel_decomp_full_mcore
00:05:50.987 ************************************
00:05:50.987 16:47:57 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:50.987 16:47:57 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:50.987 16:47:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:50.987 16:47:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:50.987 16:47:57 accel -- common/autotest_common.sh@10 -- # set +x
00:05:50.987 ************************************
00:05:50.987 START TEST accel_decomp_mthread
00:05:50.987 ************************************
00:05:50.987 16:47:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:50.987 16:47:57 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:50.987 16:47:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:50.987 16:47:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config [build_accel_config xtrace condensed]
00:05:50.987 16:47:57 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:05:50.987 [2024-07-15 16:47:57.640945] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:05:50.987 [2024-07-15 16:47:57.641006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110282 ]
00:05:51.246 EAL: No free 2048 kB hugepages reported on node 1
00:05:51.246 [2024-07-15 16:47:57.695806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:51.246 [2024-07-15 16:47:57.767483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:51.246 16:47:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # [repeated xtrace of option parsing condensed; values read: val=0x1, val=decompress, val='4096 bytes', val=software, val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=2, val='1 seconds', val=Yes]
00:05:52.621 16:47:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:52.621 16:47:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:52.621 16:47:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:52.621 real 0m1.339s
00:05:52.621 user 0m1.237s
00:05:52.621 sys 0m0.116s
00:05:52.621 16:47:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:52.621 16:47:58 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:05:52.621 ************************************
00:05:52.621 END TEST accel_decomp_mthread
00:05:52.621 ************************************
00:05:52.621 16:47:58 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:52.621 16:47:58 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:52.621 16:47:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:05:52.621 16:47:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:52.621 16:47:58 accel -- common/autotest_common.sh@10 -- # set +x
00:05:52.621 ************************************
00:05:52.621 START TEST accel_decomp_full_mthread
00:05:52.621 ************************************
00:05:52.621 16:47:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:52.621 16:47:59 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:52.621 16:47:59 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:52.621 16:47:59 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config [build_accel_config xtrace condensed]
00:05:52.621 16:47:59 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
00:05:52.621 [2024-07-15 16:47:59.040964] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:05:52.621 [2024-07-15 16:47:59.041029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110547 ]
00:05:52.621 EAL: No free 2048 kB hugepages reported on node 1
00:05:52.621 [2024-07-15 16:47:59.096076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:52.621 [2024-07-15 16:47:59.167770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:52.621 16:47:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # [repeated xtrace of option parsing condensed; values read: val=0x1, val=decompress, val='111250 bytes', val=software, val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=2, val='1 seconds', val=Yes]
00:05:53.996 16:48:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:53.996 16:48:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:53.996 16:48:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:53.996 real 0m1.362s
00:05:53.996 user 0m1.259s
00:05:53.996 sys 0m0.117s
00:05:53.996 16:48:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:53.996 16:48:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:05:53.996 ************************************
00:05:53.996 END TEST accel_decomp_full_mthread
00:05:53.996 ************************************
00:05:53.996 16:48:00 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:53.996 16:48:00 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:05:53.996 16:48:00 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:53.996 16:48:00 accel -- accel/accel.sh@137 -- # build_accel_config [build_accel_config xtrace condensed]
00:05:53.996 16:48:00 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:05:53.996 16:48:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:53.996 16:48:00 accel -- common/autotest_common.sh@10 -- # set +x
00:05:53.996 ************************************
00:05:53.996 START TEST accel_dif_functional_tests
00:05:53.996 ************************************
00:05:53.996 16:48:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:53.996 [2024-07-15 16:48:00.486085] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:05:53.996 [2024-07-15 16:48:00.486123] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110849 ] 00:05:53.996 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.996 [2024-07-15 16:48:00.538883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.996 [2024-07-15 16:48:00.613096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.996 [2024-07-15 16:48:00.613191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.996 [2024-07-15 16:48:00.613192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.256 00:05:54.256 00:05:54.256 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.256 http://cunit.sourceforge.net/ 00:05:54.256 00:05:54.256 00:05:54.256 Suite: accel_dif 00:05:54.256 Test: verify: DIF generated, GUARD check ...passed 00:05:54.256 Test: verify: DIF generated, APPTAG check ...passed 00:05:54.256 Test: verify: DIF generated, REFTAG check ...passed 00:05:54.256 Test: verify: DIF not generated, GUARD check ...[2024-07-15 16:48:00.681469] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:54.256 passed 00:05:54.256 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 16:48:00.681517] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:54.256 passed 00:05:54.256 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 16:48:00.681551] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:54.256 passed 00:05:54.256 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:54.256 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 16:48:00.681593] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:05:54.256 passed 00:05:54.256 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:54.256 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:54.256 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:54.256 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 16:48:00.681695] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:54.256 passed 00:05:54.256 Test: verify copy: DIF generated, GUARD check ...passed 00:05:54.256 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:54.256 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:54.256 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 16:48:00.681801] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:54.256 passed 00:05:54.256 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 16:48:00.681824] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:54.256 passed 00:05:54.256 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 16:48:00.681844] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:54.256 passed 00:05:54.256 Test: generate copy: DIF generated, GUARD check ...passed 00:05:54.256 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:54.256 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:54.256 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:54.256 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:54.256 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:54.256 Test: generate copy: iovecs-len validate ...[2024-07-15 16:48:00.682010] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:54.256 passed 00:05:54.256 Test: generate copy: buffer alignment validate ...passed 00:05:54.256 00:05:54.256 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.256 suites 1 1 n/a 0 0 00:05:54.256 tests 26 26 26 0 0 00:05:54.256 asserts 115 115 115 0 n/a 00:05:54.256 00:05:54.256 Elapsed time = 0.000 seconds 00:05:54.256 00:05:54.256 real 0m0.406s 00:05:54.256 user 0m0.616s 00:05:54.256 sys 0m0.145s 00:05:54.256 16:48:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.256 16:48:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:54.256 ************************************ 00:05:54.256 END TEST accel_dif_functional_tests 00:05:54.256 ************************************ 00:05:54.256 16:48:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.256 00:05:54.256 real 0m30.854s 00:05:54.256 user 0m34.712s 00:05:54.256 sys 0m4.196s 00:05:54.256 16:48:00 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.256 16:48:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.256 ************************************ 00:05:54.256 END TEST accel 00:05:54.256 ************************************ 00:05:54.256 16:48:00 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.256 16:48:00 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:54.256 16:48:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.256 16:48:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.256 16:48:00 -- common/autotest_common.sh@10 -- # set +x 00:05:54.515 ************************************ 00:05:54.515 START TEST accel_rpc 00:05:54.515 ************************************ 00:05:54.515 16:48:00 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:54.515 * Looking for test storage... 
00:05:54.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:54.515 16:48:01 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:54.515 16:48:01 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4110949 00:05:54.515 16:48:01 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4110949 00:05:54.515 16:48:01 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:54.515 16:48:01 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 4110949 ']' 00:05:54.515 16:48:01 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.515 16:48:01 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.515 16:48:01 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.515 16:48:01 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.515 16:48:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.515 [2024-07-15 16:48:01.068682] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:05:54.515 [2024-07-15 16:48:01.068730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110949 ] 00:05:54.515 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.515 [2024-07-15 16:48:01.121606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.773 [2024-07-15 16:48:01.196327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.341 16:48:01 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.341 16:48:01 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:55.341 16:48:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:55.341 16:48:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:55.341 16:48:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:55.341 16:48:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:55.342 16:48:01 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:55.342 16:48:01 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.342 16:48:01 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.342 16:48:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.342 ************************************ 00:05:55.342 START TEST accel_assign_opcode 00:05:55.342 ************************************ 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:05:55.342 [2024-07-15 16:48:01.902419] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:55.342 [2024-07-15 16:48:01.910433] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.342 16:48:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:55.601 16:48:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.601 16:48:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:55.601 16:48:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:55.601 16:48:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.601 16:48:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:55.601 16:48:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:55.601 16:48:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.601 software 00:05:55.601 00:05:55.601 real 0m0.233s 00:05:55.601 user 0m0.048s 00:05:55.601 sys 0m0.007s 00:05:55.601 16:48:02 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.601 16:48:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:55.601 ************************************ 00:05:55.601 END TEST accel_assign_opcode 00:05:55.601 ************************************ 00:05:55.601 16:48:02 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.601 16:48:02 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4110949 00:05:55.601 16:48:02 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 4110949 ']' 00:05:55.601 16:48:02 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 4110949 00:05:55.601 16:48:02 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:55.601 16:48:02 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.601 16:48:02 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4110949 00:05:55.601 16:48:02 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.601 16:48:02 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.601 16:48:02 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4110949' 00:05:55.601 killing process with pid 4110949 00:05:55.601 16:48:02 accel_rpc -- common/autotest_common.sh@967 -- # kill 4110949 00:05:55.601 16:48:02 accel_rpc -- common/autotest_common.sh@972 -- # wait 4110949 00:05:55.859 00:05:55.859 real 0m1.579s 00:05:55.859 user 0m1.660s 00:05:55.859 sys 0m0.412s 00:05:55.859 16:48:02 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.859 16:48:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.859 ************************************ 00:05:55.859 END TEST accel_rpc 00:05:55.859 ************************************ 00:05:56.117 16:48:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.117 16:48:02 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:56.117 16:48:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.117 16:48:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.117 16:48:02 -- common/autotest_common.sh@10 -- # set +x 00:05:56.117 ************************************ 00:05:56.117 START TEST app_cmdline 00:05:56.117 ************************************ 00:05:56.117 16:48:02 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:56.117 * Looking for test storage... 00:05:56.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:56.117 16:48:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:56.117 16:48:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4111320 00:05:56.117 16:48:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4111320 00:05:56.117 16:48:02 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:56.117 16:48:02 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 4111320 ']' 00:05:56.117 16:48:02 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.117 16:48:02 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.117 16:48:02 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.117 16:48:02 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.117 16:48:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:56.117 [2024-07-15 16:48:02.717190] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:05:56.117 [2024-07-15 16:48:02.717243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111320 ] 00:05:56.117 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.117 [2024-07-15 16:48:02.770042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.376 [2024-07-15 16:48:02.849923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.943 16:48:03 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.943 16:48:03 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:56.943 16:48:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:57.202 { 00:05:57.202 "version": "SPDK v24.09-pre git sha1 44e72e4e7", 00:05:57.202 "fields": { 00:05:57.202 "major": 24, 00:05:57.202 "minor": 9, 00:05:57.202 "patch": 0, 00:05:57.202 "suffix": "-pre", 00:05:57.202 "commit": "44e72e4e7" 00:05:57.202 } 00:05:57.202 } 00:05:57.202 16:48:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:57.202 16:48:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:57.202 16:48:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:57.202 16:48:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:57.202 16:48:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:57.202 16:48:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:57.202 16:48:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.202 16:48:03 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.202 16:48:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:57.202 16:48:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:57.202 16:48:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:57.202 16:48:03 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.461 request: 00:05:57.461 { 00:05:57.461 "method": "env_dpdk_get_mem_stats", 00:05:57.461 "req_id": 1 
00:05:57.461 } 00:05:57.461 Got JSON-RPC error response 00:05:57.461 response: 00:05:57.461 { 00:05:57.461 "code": -32601, 00:05:57.461 "message": "Method not found" 00:05:57.461 } 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.461 16:48:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4111320 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 4111320 ']' 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 4111320 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4111320 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4111320' 00:05:57.461 killing process with pid 4111320 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@967 -- # kill 4111320 00:05:57.461 16:48:03 app_cmdline -- common/autotest_common.sh@972 -- # wait 4111320 00:05:57.720 00:05:57.720 real 0m1.671s 00:05:57.720 user 0m2.021s 00:05:57.720 sys 0m0.400s 00:05:57.720 16:48:04 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.720 16:48:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.720 ************************************ 00:05:57.720 END TEST app_cmdline 00:05:57.720 ************************************ 00:05:57.720 16:48:04 -- 
common/autotest_common.sh@1142 -- # return 0 00:05:57.720 16:48:04 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:57.720 16:48:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.720 16:48:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.720 16:48:04 -- common/autotest_common.sh@10 -- # set +x 00:05:57.720 ************************************ 00:05:57.720 START TEST version 00:05:57.720 ************************************ 00:05:57.720 16:48:04 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:57.979 * Looking for test storage... 00:05:57.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:57.979 16:48:04 version -- app/version.sh@17 -- # get_header_version major 00:05:57.979 16:48:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.979 16:48:04 version -- app/version.sh@14 -- # cut -f2 00:05:57.979 16:48:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.979 16:48:04 version -- app/version.sh@17 -- # major=24 00:05:57.979 16:48:04 version -- app/version.sh@18 -- # get_header_version minor 00:05:57.979 16:48:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.979 16:48:04 version -- app/version.sh@14 -- # cut -f2 00:05:57.979 16:48:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.979 16:48:04 version -- app/version.sh@18 -- # minor=9 00:05:57.979 16:48:04 version -- app/version.sh@19 -- # get_header_version patch 00:05:57.979 16:48:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.979 
16:48:04 version -- app/version.sh@14 -- # cut -f2 00:05:57.979 16:48:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.979 16:48:04 version -- app/version.sh@19 -- # patch=0 00:05:57.979 16:48:04 version -- app/version.sh@20 -- # get_header_version suffix 00:05:57.979 16:48:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.979 16:48:04 version -- app/version.sh@14 -- # cut -f2 00:05:57.979 16:48:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.979 16:48:04 version -- app/version.sh@20 -- # suffix=-pre 00:05:57.979 16:48:04 version -- app/version.sh@22 -- # version=24.9 00:05:57.979 16:48:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:57.979 16:48:04 version -- app/version.sh@28 -- # version=24.9rc0 00:05:57.979 16:48:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:57.979 16:48:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:57.979 16:48:04 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:57.979 16:48:04 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:57.979 00:05:57.979 real 0m0.156s 00:05:57.979 user 0m0.089s 00:05:57.979 sys 0m0.105s 00:05:57.979 16:48:04 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.979 16:48:04 version -- common/autotest_common.sh@10 -- # set +x 00:05:57.979 ************************************ 00:05:57.979 END TEST version 00:05:57.979 ************************************ 00:05:57.979 16:48:04 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.979 16:48:04 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:05:57.979 16:48:04 -- spdk/autotest.sh@198 -- # uname -s 00:05:57.979 16:48:04 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:57.979 16:48:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:57.979 16:48:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:57.979 16:48:04 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:57.979 16:48:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:57.979 16:48:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:57.979 16:48:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.979 16:48:04 -- common/autotest_common.sh@10 -- # set +x 00:05:57.979 16:48:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:57.979 16:48:04 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:57.979 16:48:04 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:57.979 16:48:04 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:57.979 16:48:04 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:57.979 16:48:04 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:57.979 16:48:04 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:57.979 16:48:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:57.979 16:48:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.979 16:48:04 -- common/autotest_common.sh@10 -- # set +x 00:05:57.979 ************************************ 00:05:57.979 START TEST nvmf_tcp 00:05:57.979 ************************************ 00:05:57.979 16:48:04 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:57.979 * Looking for test storage... 00:05:57.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:57.979 16:48:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.238 16:48:04 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.238 16:48:04 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.238 16:48:04 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.238 16:48:04 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.238 16:48:04 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.238 16:48:04 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.238 16:48:04 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:58.238 16:48:04 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.238 16:48:04 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.238 16:48:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:58.239 16:48:04 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:58.239 16:48:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:58.239 16:48:04 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.239 16:48:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.239 16:48:04 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:58.239 16:48:04 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:58.239 16:48:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:58.239 16:48:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.239 16:48:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.239 ************************************ 00:05:58.239 START TEST nvmf_example 00:05:58.239 ************************************ 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:58.239 * Looking for test storage... 
00:05:58.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.239 16:48:04 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.239 16:48:04 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:58.239 
16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:58.239 16:48:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:03.511 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:03.511 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:03.511 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:03.512 Found net devices under 0000:86:00.0: cvl_0_0 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:03.512 Found net devices under 0000:86:00.1: cvl_0_1 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:03.512 16:48:09 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:03.512 16:48:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:03.512 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.512 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.512 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:03.512 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:03.512 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:03.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:03.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:06:03.771 00:06:03.771 --- 10.0.0.2 ping statistics --- 00:06:03.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.771 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:03.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:03.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:06:03.771 00:06:03.771 --- 10.0.0.1 ping statistics --- 00:06:03.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.771 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:03.771 16:48:10 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4115316 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4115316 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 4115316 ']' 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:03.771 16:48:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:03.771 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.707 
16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:04.707 16:48:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:04.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.944 Initializing NVMe Controllers 00:06:16.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:16.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
00:06:16.944 Initialization complete. Launching workers. 00:06:16.944 ======================================================== 00:06:16.944 Latency(us) 00:06:16.944 Device Information : IOPS MiB/s Average min max 00:06:16.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18253.86 71.30 3507.07 707.70 15446.64 00:06:16.944 ======================================================== 00:06:16.944 Total : 18253.86 71.30 3507.07 707.70 15446.64 00:06:16.944 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:16.944 rmmod nvme_tcp 00:06:16.944 rmmod nvme_fabrics 00:06:16.944 rmmod nvme_keyring 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 4115316 ']' 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 4115316 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 4115316 ']' 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 4115316 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # 
uname 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4115316 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4115316' 00:06:16.944 killing process with pid 4115316 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 4115316 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 4115316 00:06:16.944 nvmf threads initialize successfully 00:06:16.944 bdev subsystem init successfully 00:06:16.944 created a nvmf target service 00:06:16.944 create targets's poll groups done 00:06:16.944 all subsystems of target started 00:06:16.944 nvmf target is running 00:06:16.944 all subsystems of target stopped 00:06:16.944 destroy targets's poll groups done 00:06:16.944 destroyed the nvmf target service 00:06:16.944 bdev subsystem finish successfully 00:06:16.944 nvmf threads destroy successfully 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.944 16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:16.944 
16:48:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.203 16:48:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:17.203 16:48:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:17.203 16:48:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.203 16:48:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.203 00:06:17.203 real 0m19.113s 00:06:17.203 user 0m45.898s 00:06:17.203 sys 0m5.428s 00:06:17.203 16:48:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.203 16:48:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.203 ************************************ 00:06:17.203 END TEST nvmf_example 00:06:17.203 ************************************ 00:06:17.203 16:48:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:17.203 16:48:23 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:17.203 16:48:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:17.203 16:48:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.203 16:48:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.464 ************************************ 00:06:17.464 START TEST nvmf_filesystem 00:06:17.464 ************************************ 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:17.465 * Looking for test storage... 
00:06:17.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:17.465 16:48:23 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:17.465 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:17.465 #define SPDK_CONFIG_H 00:06:17.465 
#define SPDK_CONFIG_APPS 1 00:06:17.465 #define SPDK_CONFIG_ARCH native 00:06:17.465 #undef SPDK_CONFIG_ASAN 00:06:17.465 #undef SPDK_CONFIG_AVAHI 00:06:17.465 #undef SPDK_CONFIG_CET 00:06:17.465 #define SPDK_CONFIG_COVERAGE 1 00:06:17.465 #define SPDK_CONFIG_CROSS_PREFIX 00:06:17.465 #undef SPDK_CONFIG_CRYPTO 00:06:17.465 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:17.465 #undef SPDK_CONFIG_CUSTOMOCF 00:06:17.465 #undef SPDK_CONFIG_DAOS 00:06:17.465 #define SPDK_CONFIG_DAOS_DIR 00:06:17.465 #define SPDK_CONFIG_DEBUG 1 00:06:17.465 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:17.465 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:17.465 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:17.465 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:17.465 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:17.465 #undef SPDK_CONFIG_DPDK_UADK 00:06:17.465 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:17.465 #define SPDK_CONFIG_EXAMPLES 1 00:06:17.465 #undef SPDK_CONFIG_FC 00:06:17.465 #define SPDK_CONFIG_FC_PATH 00:06:17.465 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:17.466 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:17.466 #undef SPDK_CONFIG_FUSE 00:06:17.466 #undef SPDK_CONFIG_FUZZER 00:06:17.466 #define SPDK_CONFIG_FUZZER_LIB 00:06:17.466 #undef SPDK_CONFIG_GOLANG 00:06:17.466 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:17.466 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:17.466 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:17.466 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:17.466 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:17.466 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:17.466 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:17.466 #define SPDK_CONFIG_IDXD 1 00:06:17.466 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:17.466 #undef SPDK_CONFIG_IPSEC_MB 00:06:17.466 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:17.466 #define SPDK_CONFIG_ISAL 1 00:06:17.466 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:17.466 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:06:17.466 #define SPDK_CONFIG_LIBDIR 00:06:17.466 #undef SPDK_CONFIG_LTO 00:06:17.466 #define SPDK_CONFIG_MAX_LCORES 128 00:06:17.466 #define SPDK_CONFIG_NVME_CUSE 1 00:06:17.466 #undef SPDK_CONFIG_OCF 00:06:17.466 #define SPDK_CONFIG_OCF_PATH 00:06:17.466 #define SPDK_CONFIG_OPENSSL_PATH 00:06:17.466 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:17.466 #define SPDK_CONFIG_PGO_DIR 00:06:17.466 #undef SPDK_CONFIG_PGO_USE 00:06:17.466 #define SPDK_CONFIG_PREFIX /usr/local 00:06:17.466 #undef SPDK_CONFIG_RAID5F 00:06:17.466 #undef SPDK_CONFIG_RBD 00:06:17.466 #define SPDK_CONFIG_RDMA 1 00:06:17.466 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:17.466 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:17.466 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:17.466 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:17.466 #define SPDK_CONFIG_SHARED 1 00:06:17.466 #undef SPDK_CONFIG_SMA 00:06:17.466 #define SPDK_CONFIG_TESTS 1 00:06:17.466 #undef SPDK_CONFIG_TSAN 00:06:17.466 #define SPDK_CONFIG_UBLK 1 00:06:17.466 #define SPDK_CONFIG_UBSAN 1 00:06:17.466 #undef SPDK_CONFIG_UNIT_TESTS 00:06:17.466 #undef SPDK_CONFIG_URING 00:06:17.466 #define SPDK_CONFIG_URING_PATH 00:06:17.466 #undef SPDK_CONFIG_URING_ZNS 00:06:17.466 #undef SPDK_CONFIG_USDT 00:06:17.466 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:17.466 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:17.466 #define SPDK_CONFIG_VFIO_USER 1 00:06:17.466 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:17.466 #define SPDK_CONFIG_VHOST 1 00:06:17.466 #define SPDK_CONFIG_VIRTIO 1 00:06:17.466 #undef SPDK_CONFIG_VTUNE 00:06:17.466 #define SPDK_CONFIG_VTUNE_DIR 00:06:17.466 #define SPDK_CONFIG_WERROR 1 00:06:17.466 #define SPDK_CONFIG_WPDK_DIR 00:06:17.466 #undef SPDK_CONFIG_XNVME 00:06:17.466 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:17.466 16:48:23 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:17.466 16:48:24 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:17.466 16:48:24 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:17.466 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:17.467 16:48:24 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:17.467 16:48:24 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:17.467 16:48:24 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:17.467 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:17.468 16:48:24 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:17.468 16:48:24 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 4117743 ]] 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 4117743 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.8oZ93R 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.8oZ93R/tests/target /tmp/spdk.8oZ93R 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:06:17.468 16:48:24 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189584465920 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6389833728 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=9375744 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986441216 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=708608 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:17.468 * Looking for test storage... 
00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189584465920 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8604426240 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:17.468 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:17.469 16:48:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:22.744 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:22.744 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:22.744 Found net devices under 0000:86:00.0: cvl_0_0 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:22.744 Found net devices under 0000:86:00.1: cvl_0_1 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.744 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:22.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:22.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:06:22.744 00:06:22.744 --- 10.0.0.2 ping statistics --- 00:06:22.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.745 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:06:22.745 16:48:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:22.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:06:22.745 00:06:22.745 --- 10.0.0.1 ping statistics --- 00:06:22.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.745 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.745 ************************************ 00:06:22.745 START TEST nvmf_filesystem_no_in_capsule 00:06:22.745 ************************************ 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4120644 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4120644 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4120644 ']' 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.745 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:22.745 [2024-07-15 16:48:29.119171] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:06:22.745 [2024-07-15 16:48:29.119216] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.745 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.745 [2024-07-15 16:48:29.175374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.745 [2024-07-15 16:48:29.262679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.745 [2024-07-15 16:48:29.262713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.745 [2024-07-15 16:48:29.262720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.745 [2024-07-15 16:48:29.262726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.745 [2024-07-15 16:48:29.262731] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
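The trace above (nvmf/common.sh `nvmf_tcp_init`) builds the test topology: one port of the NIC is moved into a network namespace to act as the NVMe/TCP target (10.0.0.2), the other stays in the host namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and reachability is verified with ping before `nvmf_tgt` is launched inside the namespace. The sketch below recreates that topology with a veth pair instead of the physical E810 ports (`cvl_0_0`/`cvl_0_1` are hardware-specific names from this rig); interface names here are illustrative, and the commands require root.

```shell
# Hedged sketch of the target/initiator split performed by nvmf_tcp_init,
# using a veth pair so it runs without the E810 hardware. Run as root.
set -e
NS=cvl_0_0_ns_spdk

ip netns add "$NS"                          # namespace that will host nvmf_tgt
ip link add veth_tgt type veth peer name veth_ini
ip link set veth_tgt netns "$NS"            # target side moves into the namespace

ip addr add 10.0.0.1/24 dev veth_ini        # initiator IP (host side)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt  # target IP

ip link set veth_ini up
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up

# accept NVMe/TCP traffic on the default discovery/IO port
iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                          # host -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> host
```

The target application is then started as `ip netns exec $NS nvmf_tgt ...`, exactly as the `NVMF_TARGET_NS_CMD` prefix in the log shows.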
00:06:22.745 [2024-07-15 16:48:29.262770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.745 [2024-07-15 16:48:29.262786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.745 [2024-07-15 16:48:29.262876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.745 [2024-07-15 16:48:29.262877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:23.337 [2024-07-15 16:48:29.973137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.337 16:48:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:23.597 Malloc1 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:23.598 16:48:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:23.598 [2024-07-15 16:48:30.123332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:23.598 { 00:06:23.598 "name": "Malloc1", 00:06:23.598 "aliases": [ 00:06:23.598 "7b0d81e9-832f-4c7b-997e-dd6d4bae0b06" 00:06:23.598 ], 00:06:23.598 "product_name": "Malloc disk", 
00:06:23.598 "block_size": 512, 00:06:23.598 "num_blocks": 1048576, 00:06:23.598 "uuid": "7b0d81e9-832f-4c7b-997e-dd6d4bae0b06", 00:06:23.598 "assigned_rate_limits": { 00:06:23.598 "rw_ios_per_sec": 0, 00:06:23.598 "rw_mbytes_per_sec": 0, 00:06:23.598 "r_mbytes_per_sec": 0, 00:06:23.598 "w_mbytes_per_sec": 0 00:06:23.598 }, 00:06:23.598 "claimed": true, 00:06:23.598 "claim_type": "exclusive_write", 00:06:23.598 "zoned": false, 00:06:23.598 "supported_io_types": { 00:06:23.598 "read": true, 00:06:23.598 "write": true, 00:06:23.598 "unmap": true, 00:06:23.598 "flush": true, 00:06:23.598 "reset": true, 00:06:23.598 "nvme_admin": false, 00:06:23.598 "nvme_io": false, 00:06:23.598 "nvme_io_md": false, 00:06:23.598 "write_zeroes": true, 00:06:23.598 "zcopy": true, 00:06:23.598 "get_zone_info": false, 00:06:23.598 "zone_management": false, 00:06:23.598 "zone_append": false, 00:06:23.598 "compare": false, 00:06:23.598 "compare_and_write": false, 00:06:23.598 "abort": true, 00:06:23.598 "seek_hole": false, 00:06:23.598 "seek_data": false, 00:06:23.598 "copy": true, 00:06:23.598 "nvme_iov_md": false 00:06:23.598 }, 00:06:23.598 "memory_domains": [ 00:06:23.598 { 00:06:23.598 "dma_device_id": "system", 00:06:23.598 "dma_device_type": 1 00:06:23.598 }, 00:06:23.598 { 00:06:23.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.598 "dma_device_type": 2 00:06:23.598 } 00:06:23.598 ], 00:06:23.598 "driver_specific": {} 00:06:23.598 } 00:06:23.598 ]' 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:23.598 
16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:23.598 16:48:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:24.975 16:48:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:24.975 16:48:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:24.975 16:48:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:24.975 16:48:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:24.975 16:48:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:26.877 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:27.136 16:48:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:28.073 16:48:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:29.015 16:48:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.015 ************************************ 00:06:29.015 START TEST filesystem_ext4 00:06:29.015 ************************************ 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:29.015 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:29.015 mke2fs 1.46.5 (30-Dec-2021) 00:06:29.015 Discarding device blocks: 0/522240 done 00:06:29.015 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:29.015 Filesystem UUID: 8e6c8078-1b48-4e91-a84b-103d948ae5a9 00:06:29.015 Superblock backups stored on blocks: 00:06:29.015 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:29.015 00:06:29.015 Allocating group tables: 0/64 done 00:06:29.015 Writing inode tables: 0/64 done 00:06:29.274 Creating journal (8192 blocks): done 00:06:29.274 Writing superblocks and filesystem accounting information: 0/64 done 00:06:29.274 00:06:29.274 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:29.274 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:29.274 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:29.274 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:29.274 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:29.274 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:29.533 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:06:29.533 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:29.533 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4120644 00:06:29.533 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:29.533 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:29.533 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:29.533 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:29.533 00:06:29.533 real 0m0.461s 00:06:29.533 user 0m0.023s 00:06:29.533 sys 0m0.064s 00:06:29.533 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.533 16:48:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:29.533 ************************************ 00:06:29.533 END TEST filesystem_ext4 00:06:29.533 ************************************ 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.533 16:48:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.533 ************************************ 00:06:29.533 START TEST filesystem_btrfs 00:06:29.533 ************************************ 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:29.533 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:30.100 btrfs-progs v6.6.2 00:06:30.100 See https://btrfs.readthedocs.io for more 
information. 00:06:30.100 00:06:30.100 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:30.100 NOTE: several default settings have changed in version 5.15, please make sure 00:06:30.100 this does not affect your deployments: 00:06:30.100 - DUP for metadata (-m dup) 00:06:30.100 - enabled no-holes (-O no-holes) 00:06:30.100 - enabled free-space-tree (-R free-space-tree) 00:06:30.100 00:06:30.100 Label: (null) 00:06:30.100 UUID: 59ac2eaa-77e4-43d3-a523-50466d1353c2 00:06:30.100 Node size: 16384 00:06:30.100 Sector size: 4096 00:06:30.100 Filesystem size: 510.00MiB 00:06:30.100 Block group profiles: 00:06:30.100 Data: single 8.00MiB 00:06:30.100 Metadata: DUP 32.00MiB 00:06:30.100 System: DUP 8.00MiB 00:06:30.100 SSD detected: yes 00:06:30.100 Zoned device: no 00:06:30.100 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:30.100 Runtime features: free-space-tree 00:06:30.100 Checksum: crc32c 00:06:30.100 Number of devices: 1 00:06:30.100 Devices: 00:06:30.100 ID SIZE PATH 00:06:30.100 1 510.00MiB /dev/nvme0n1p1 00:06:30.100 00:06:30.100 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:30.100 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:30.359 16:48:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4120644 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:30.359 00:06:30.359 real 0m0.801s 00:06:30.359 user 0m0.028s 00:06:30.359 sys 0m0.118s 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:30.359 ************************************ 00:06:30.359 END TEST filesystem_btrfs 00:06:30.359 ************************************ 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:30.359 ************************************ 00:06:30.359 START TEST filesystem_xfs 00:06:30.359 ************************************ 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:30.359 16:48:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:30.359 meta-data=/dev/nvme0n1p1 isize=512 
agcount=4, agsize=32640 blks
00:06:30.359 = sectsz=512 attr=2, projid32bit=1
00:06:30.359 = crc=1 finobt=1, sparse=1, rmapbt=0
00:06:30.359 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:06:30.359 data = bsize=4096 blocks=130560, imaxpct=25
00:06:30.359 = sunit=0 swidth=0 blks
00:06:30.359 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:06:30.359 log =internal log bsize=4096 blocks=16384, version=2
00:06:30.359 = sectsz=512 sunit=0 blks, lazy-count=1
00:06:30.359 realtime =none extsz=4096 blocks=0, rtextents=0
00:06:31.295 Discarding blocks...Done.
00:06:31.295 16:48:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:31.295 16:48:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4120644 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:33.832 16:48:40
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:33.832 00:06:33.832 real 0m3.165s 00:06:33.832 user 0m0.024s 00:06:33.832 sys 0m0.071s 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:33.832 ************************************ 00:06:33.832 END TEST filesystem_xfs 00:06:33.832 ************************************ 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:33.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:33.832 16:48:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.832 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4120644 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4120644 ']' 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 4120644 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 4120644 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4120644' 00:06:33.833 killing process with pid 4120644 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 4120644 00:06:33.833 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 4120644 00:06:34.091 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:34.091 00:06:34.091 real 0m11.687s 00:06:34.091 user 0m45.877s 00:06:34.091 sys 0m1.199s 00:06:34.091 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.091 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.091 ************************************ 00:06:34.091 END TEST nvmf_filesystem_no_in_capsule 00:06:34.091 ************************************ 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.350 ************************************ 00:06:34.350 START TEST 
nvmf_filesystem_in_capsule 00:06:34.350 ************************************ 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4122826 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4122826 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4122826 ']' 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:34.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.350 16:48:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.350 [2024-07-15 16:48:40.882145] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:06:34.350 [2024-07-15 16:48:40.882189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.350 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.350 [2024-07-15 16:48:40.940842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.351 [2024-07-15 16:48:41.010668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.351 [2024-07-15 16:48:41.010710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.351 [2024-07-15 16:48:41.010718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.351 [2024-07-15 16:48:41.010723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.351 [2024-07-15 16:48:41.010728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:34.351 [2024-07-15 16:48:41.010776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.351 [2024-07-15 16:48:41.010875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.351 [2024-07-15 16:48:41.010938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.351 [2024-07-15 16:48:41.010939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.287 [2024-07-15 16:48:41.735235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
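The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step traced above amounts to polling the target pid and its RPC socket until the socket appears. A minimal sketch of that wait loop; the retry count, poll interval, and default socket path here are assumptions for illustration, not values taken from this log:

```shell
# Hedged sketch of a waitforlisten-style helper: succeed once the RPC
# UNIX socket exists, fail fast if the target process has died.
waitforlisten() {
    pid=$1
    rpc_addr=${2:-/var/tmp/spdk.sock}   # assumed default path
    max_retries=${3:-100}               # assumed retry budget
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        # Give up early if the target process is gone.
        kill -0 "$pid" 2>/dev/null || return 1
        # -S: the RPC endpoint is a UNIX domain socket.
        [ -S "$rpc_addr" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```

Once this returns 0, the test proceeds to issue rpc_cmd calls (nvmf_create_transport, bdev_malloc_create, ...) against the socket, as the subsequent trace records show.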
00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.287 Malloc1 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.287 16:48:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.287 [2024-07-15 16:48:41.883936] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.287 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:35.287 { 00:06:35.287 "name": "Malloc1", 00:06:35.287 "aliases": [ 00:06:35.287 "14111197-64ee-4db9-9ae7-b911f34beb17" 00:06:35.287 ], 00:06:35.287 "product_name": "Malloc disk", 00:06:35.287 "block_size": 512, 00:06:35.287 "num_blocks": 1048576, 00:06:35.287 "uuid": "14111197-64ee-4db9-9ae7-b911f34beb17", 00:06:35.287 "assigned_rate_limits": { 
00:06:35.287 "rw_ios_per_sec": 0,
00:06:35.287 "rw_mbytes_per_sec": 0,
00:06:35.287 "r_mbytes_per_sec": 0,
00:06:35.287 "w_mbytes_per_sec": 0
00:06:35.287 },
00:06:35.287 "claimed": true,
00:06:35.287 "claim_type": "exclusive_write",
00:06:35.287 "zoned": false,
00:06:35.287 "supported_io_types": {
00:06:35.287 "read": true,
00:06:35.287 "write": true,
00:06:35.287 "unmap": true,
00:06:35.287 "flush": true,
00:06:35.287 "reset": true,
00:06:35.287 "nvme_admin": false,
00:06:35.287 "nvme_io": false,
00:06:35.287 "nvme_io_md": false,
00:06:35.287 "write_zeroes": true,
00:06:35.287 "zcopy": true,
00:06:35.287 "get_zone_info": false,
00:06:35.287 "zone_management": false,
00:06:35.287 "zone_append": false,
00:06:35.287 "compare": false,
00:06:35.287 "compare_and_write": false,
00:06:35.287 "abort": true,
00:06:35.287 "seek_hole": false,
00:06:35.287 "seek_data": false,
00:06:35.287 "copy": true,
00:06:35.288 "nvme_iov_md": false
00:06:35.288 },
00:06:35.288 "memory_domains": [
00:06:35.288 {
00:06:35.288 "dma_device_id": "system",
00:06:35.288 "dma_device_type": 1
00:06:35.288 },
00:06:35.288 {
00:06:35.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:35.288 "dma_device_type": 2
00:06:35.288 }
00:06:35.288 ],
00:06:35.288 "driver_specific": {}
00:06:35.288 }
00:06:35.288 ]'
00:06:35.288 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:35.288 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:35.547 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:35.547 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:35.547 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:35.547 16:48:41
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:35.547 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:35.547 16:48:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:36.545 16:48:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:36.545 16:48:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:36.545 16:48:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:36.545 16:48:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:36.545 16:48:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:38.447 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:38.447 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:38.447 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:38.447 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:38.447 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:38.448 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:06:38.448 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:38.448 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:38.448 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:38.448 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:38.448 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:38.448 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:38.448 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:38.448 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:38.448 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:38.706 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:38.706 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:38.706 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:39.273 16:48:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
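The make_filesystem calls traced throughout this log (autotest_common.sh@924-935) pick mkfs's force flag per filesystem type before formatting the freshly created partition. A rough reconstruction of that helper; only the flag-selection branches are taken directly from the xtrace lines ('[' ext4 = ext4 ']' then force=-F, otherwise force=-f), while the retry limit and sleep are assumptions:

```shell
# ext4's mkfs forces with -F; xfs and btrfs use -f, matching the
# traced force=-F / force=-f assignments above.
pick_force_flag() {
    if [ "$1" = ext4 ]; then
        echo "-F"
    else
        echo "-f"
    fi
}

# Sketch of make_filesystem: format $dev_name as $fstype, retrying a
# few times because the partition node can lag behind partprobe.
make_filesystem() {
    fstype=$1
    dev_name=$2
    i=0
    force=$(pick_force_flag "$fstype")
    while [ "$i" -lt 3 ]; do
        "mkfs.$fstype" "$force" "$dev_name" && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```

After a successful format the test mounts the partition, touches and removes a file, syncs, and unmounts, which is exactly the filesystem.sh@23-30 sequence repeated for each fstype in the log.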
00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.207 ************************************ 00:06:40.207 START TEST filesystem_in_capsule_ext4 00:06:40.207 ************************************ 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:40.207 16:48:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:40.207 16:48:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:06:40.207 mke2fs 1.46.5 (30-Dec-2021)
00:06:40.207 Discarding device blocks: 0/522240 done
00:06:40.207 Creating filesystem with 522240 1k blocks and 130560 inodes
00:06:40.207 Filesystem UUID: 181a9aa4-055f-465d-805a-aa85883e555e
00:06:40.207 Superblock backups stored on blocks: 
00:06:40.207 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:06:40.207 
00:06:40.207 Allocating group tables: 0/64 done
00:06:40.207 Writing inode tables: 0/64 done
00:06:40.465 Creating journal (8192 blocks): done
00:06:41.399 Writing superblocks and filesystem accounting information: 0/64 done
00:06:41.399 
00:06:41.399 16:48:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:41.399 16:48:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:42.334 16:48:48
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4122826 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.334 00:06:42.334 real 0m2.096s 00:06:42.334 user 0m0.037s 00:06:42.334 sys 0m0.056s 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:42.334 ************************************ 00:06:42.334 END TEST filesystem_in_capsule_ext4 00:06:42.334 ************************************ 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.334 
16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.334 ************************************ 00:06:42.334 START TEST filesystem_in_capsule_btrfs 00:06:42.334 ************************************ 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:42.334 16:48:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:06:42.593 btrfs-progs v6.6.2 00:06:42.593 See https://btrfs.readthedocs.io for more information. 00:06:42.593 00:06:42.593 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:42.593 NOTE: several default settings have changed in version 5.15, please make sure 00:06:42.593 this does not affect your deployments: 00:06:42.593 - DUP for metadata (-m dup) 00:06:42.593 - enabled no-holes (-O no-holes) 00:06:42.593 - enabled free-space-tree (-R free-space-tree) 00:06:42.593 00:06:42.593 Label: (null) 00:06:42.593 UUID: a7a9d1ba-75c0-4b47-bf04-6f051d83cf9a 00:06:42.593 Node size: 16384 00:06:42.593 Sector size: 4096 00:06:42.593 Filesystem size: 510.00MiB 00:06:42.593 Block group profiles: 00:06:42.593 Data: single 8.00MiB 00:06:42.593 Metadata: DUP 32.00MiB 00:06:42.593 System: DUP 8.00MiB 00:06:42.593 SSD detected: yes 00:06:42.593 Zoned device: no 00:06:42.593 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:42.593 Runtime features: free-space-tree 00:06:42.593 Checksum: crc32c 00:06:42.593 Number of devices: 1 00:06:42.593 Devices: 00:06:42.593 ID SIZE PATH 00:06:42.593 1 510.00MiB /dev/nvme0n1p1 00:06:42.593 00:06:42.593 16:48:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:42.593 16:48:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:43.528 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:43.528 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:43.528 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:43.528 16:48:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:43.528 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:43.528 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:43.528 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4122826 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:43.529 00:06:43.529 real 0m1.220s 00:06:43.529 user 0m0.040s 00:06:43.529 sys 0m0.113s 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:43.529 ************************************ 00:06:43.529 END TEST filesystem_in_capsule_btrfs 00:06:43.529 ************************************ 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.529 ************************************ 00:06:43.529 START TEST filesystem_in_capsule_xfs 00:06:43.529 ************************************ 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:43.529 16:48:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:43.529 16:48:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:43.787 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:43.787 = sectsz=512 attr=2, projid32bit=1 00:06:43.787 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:43.787 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:43.787 data = bsize=4096 blocks=130560, imaxpct=25 00:06:43.787 = sunit=0 swidth=0 blks 00:06:43.787 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:43.787 log =internal log bsize=4096 blocks=16384, version=2 00:06:43.787 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:43.787 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:44.722 Discarding blocks...Done. 00:06:44.722 16:48:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:44.722 16:48:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:47.267 16:48:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4122826 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:47.267 00:06:47.267 real 0m3.505s 00:06:47.267 user 0m0.024s 00:06:47.267 sys 0m0.072s 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:47.267 ************************************ 00:06:47.267 END TEST filesystem_in_capsule_xfs 00:06:47.267 ************************************ 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:47.267 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.267 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4122826 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4122826 ']' 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 4122826 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4122826 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4122826' 00:06:47.525 killing process with pid 4122826 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 4122826 00:06:47.525 16:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 4122826 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:47.784 00:06:47.784 real 0m13.507s 00:06:47.784 user 0m53.162s 00:06:47.784 sys 0m1.227s 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.784 ************************************ 00:06:47.784 END TEST nvmf_filesystem_in_capsule 00:06:47.784 ************************************ 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:47.784 16:48:54 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:47.784 rmmod nvme_tcp 00:06:47.784 rmmod nvme_fabrics 00:06:47.784 rmmod nvme_keyring 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.784 16:48:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.348 16:48:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:50.348 00:06:50.348 real 0m32.602s 00:06:50.348 user 1m40.421s 00:06:50.348 sys 0m6.297s 00:06:50.348 16:48:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.348 16:48:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.348 ************************************ 00:06:50.348 END TEST nvmf_filesystem 00:06:50.348 ************************************ 00:06:50.348 16:48:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:50.349 16:48:56 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:50.349 16:48:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:50.349 16:48:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.349 16:48:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.349 ************************************ 00:06:50.349 START TEST nvmf_target_discovery 00:06:50.349 ************************************ 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:50.349 * Looking for test storage... 
00:06:50.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.349 16:48:56 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:50.349 16:48:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.537 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:54.538 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.538 
16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:54.538 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:54.538 Found net devices under 0000:86:00.0: cvl_0_0 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:54.538 Found net devices under 0000:86:00.1: cvl_0_1 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.538 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.798 16:49:01 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:54.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:06:54.798 00:06:54.798 --- 10.0.0.2 ping statistics --- 00:06:54.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.798 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:54.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:06:54.798 00:06:54.798 --- 10.0.0.1 ping statistics --- 00:06:54.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.798 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=4128633 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 4128633 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 4128633 ']' 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.798 16:49:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:54.798 [2024-07-15 16:49:01.454978] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:06:54.798 [2024-07-15 16:49:01.455023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.057 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.057 [2024-07-15 16:49:01.511284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.057 [2024-07-15 16:49:01.591979] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.057 [2024-07-15 16:49:01.592014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.057 [2024-07-15 16:49:01.592021] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.057 [2024-07-15 16:49:01.592027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.057 [2024-07-15 16:49:01.592032] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:55.057 [2024-07-15 16:49:01.592072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.057 [2024-07-15 16:49:01.592167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.057 [2024-07-15 16:49:01.592254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.057 [2024-07-15 16:49:01.592255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.625 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.625 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:55.625 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.625 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.625 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 [2024-07-15 16:49:02.311139] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:55.923 16:49:02 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 Null1 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 [2024-07-15 16:49:02.356646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:55.923 16:49:02 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 Null2 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 Null3 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 Null4 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.923 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.924 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:06:56.183 00:06:56.183 Discovery Log Number of Records 6, Generation counter 6 00:06:56.183 =====Discovery Log Entry 0====== 00:06:56.183 trtype: tcp 00:06:56.183 adrfam: ipv4 00:06:56.183 subtype: current discovery subsystem 00:06:56.183 treq: not required 00:06:56.183 portid: 0 00:06:56.183 trsvcid: 4420 00:06:56.183 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:56.183 traddr: 10.0.0.2 00:06:56.183 eflags: explicit discovery connections, duplicate discovery information 00:06:56.183 sectype: none 00:06:56.183 =====Discovery Log Entry 1====== 00:06:56.183 trtype: tcp 00:06:56.183 adrfam: ipv4 00:06:56.183 subtype: nvme subsystem 00:06:56.183 treq: not required 00:06:56.183 portid: 0 00:06:56.183 trsvcid: 4420 00:06:56.183 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:56.183 traddr: 10.0.0.2 00:06:56.183 eflags: none 00:06:56.183 sectype: none 00:06:56.183 =====Discovery Log Entry 2====== 00:06:56.183 trtype: tcp 00:06:56.183 adrfam: ipv4 00:06:56.183 subtype: nvme subsystem 00:06:56.183 treq: not required 00:06:56.183 portid: 
0 00:06:56.183 trsvcid: 4420 00:06:56.183 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:56.183 traddr: 10.0.0.2 00:06:56.183 eflags: none 00:06:56.183 sectype: none 00:06:56.183 =====Discovery Log Entry 3====== 00:06:56.183 trtype: tcp 00:06:56.183 adrfam: ipv4 00:06:56.183 subtype: nvme subsystem 00:06:56.183 treq: not required 00:06:56.183 portid: 0 00:06:56.183 trsvcid: 4420 00:06:56.183 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:56.183 traddr: 10.0.0.2 00:06:56.183 eflags: none 00:06:56.183 sectype: none 00:06:56.183 =====Discovery Log Entry 4====== 00:06:56.183 trtype: tcp 00:06:56.183 adrfam: ipv4 00:06:56.183 subtype: nvme subsystem 00:06:56.183 treq: not required 00:06:56.183 portid: 0 00:06:56.183 trsvcid: 4420 00:06:56.183 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:56.183 traddr: 10.0.0.2 00:06:56.183 eflags: none 00:06:56.183 sectype: none 00:06:56.183 =====Discovery Log Entry 5====== 00:06:56.183 trtype: tcp 00:06:56.183 adrfam: ipv4 00:06:56.183 subtype: discovery subsystem referral 00:06:56.183 treq: not required 00:06:56.183 portid: 0 00:06:56.183 trsvcid: 4430 00:06:56.183 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:56.183 traddr: 10.0.0.2 00:06:56.183 eflags: none 00:06:56.183 sectype: none 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:56.183 Perform nvmf subsystem discovery via RPC 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.183 [ 00:06:56.183 { 00:06:56.183 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:56.183 "subtype": "Discovery", 00:06:56.183 "listen_addresses": [ 00:06:56.183 { 00:06:56.183 "trtype": "TCP", 00:06:56.183 "adrfam": "IPv4", 00:06:56.183 "traddr": "10.0.0.2", 
00:06:56.183 "trsvcid": "4420" 00:06:56.183 } 00:06:56.183 ], 00:06:56.183 "allow_any_host": true, 00:06:56.183 "hosts": [] 00:06:56.183 }, 00:06:56.183 { 00:06:56.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:56.183 "subtype": "NVMe", 00:06:56.183 "listen_addresses": [ 00:06:56.183 { 00:06:56.183 "trtype": "TCP", 00:06:56.183 "adrfam": "IPv4", 00:06:56.183 "traddr": "10.0.0.2", 00:06:56.183 "trsvcid": "4420" 00:06:56.183 } 00:06:56.183 ], 00:06:56.183 "allow_any_host": true, 00:06:56.183 "hosts": [], 00:06:56.183 "serial_number": "SPDK00000000000001", 00:06:56.183 "model_number": "SPDK bdev Controller", 00:06:56.183 "max_namespaces": 32, 00:06:56.183 "min_cntlid": 1, 00:06:56.183 "max_cntlid": 65519, 00:06:56.183 "namespaces": [ 00:06:56.183 { 00:06:56.183 "nsid": 1, 00:06:56.183 "bdev_name": "Null1", 00:06:56.183 "name": "Null1", 00:06:56.183 "nguid": "1E51FB4B46C3474D920162BAF0664D3C", 00:06:56.183 "uuid": "1e51fb4b-46c3-474d-9201-62baf0664d3c" 00:06:56.183 } 00:06:56.183 ] 00:06:56.183 }, 00:06:56.183 { 00:06:56.183 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:56.183 "subtype": "NVMe", 00:06:56.183 "listen_addresses": [ 00:06:56.183 { 00:06:56.183 "trtype": "TCP", 00:06:56.183 "adrfam": "IPv4", 00:06:56.183 "traddr": "10.0.0.2", 00:06:56.183 "trsvcid": "4420" 00:06:56.183 } 00:06:56.183 ], 00:06:56.183 "allow_any_host": true, 00:06:56.183 "hosts": [], 00:06:56.183 "serial_number": "SPDK00000000000002", 00:06:56.183 "model_number": "SPDK bdev Controller", 00:06:56.183 "max_namespaces": 32, 00:06:56.183 "min_cntlid": 1, 00:06:56.183 "max_cntlid": 65519, 00:06:56.183 "namespaces": [ 00:06:56.183 { 00:06:56.183 "nsid": 1, 00:06:56.183 "bdev_name": "Null2", 00:06:56.183 "name": "Null2", 00:06:56.183 "nguid": "D1BC2FB3E2DF4770B3DE498F33C6536F", 00:06:56.183 "uuid": "d1bc2fb3-e2df-4770-b3de-498f33c6536f" 00:06:56.183 } 00:06:56.183 ] 00:06:56.183 }, 00:06:56.183 { 00:06:56.183 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:56.183 "subtype": "NVMe", 00:06:56.183 
"listen_addresses": [ 00:06:56.183 { 00:06:56.183 "trtype": "TCP", 00:06:56.183 "adrfam": "IPv4", 00:06:56.183 "traddr": "10.0.0.2", 00:06:56.183 "trsvcid": "4420" 00:06:56.183 } 00:06:56.183 ], 00:06:56.183 "allow_any_host": true, 00:06:56.183 "hosts": [], 00:06:56.183 "serial_number": "SPDK00000000000003", 00:06:56.183 "model_number": "SPDK bdev Controller", 00:06:56.183 "max_namespaces": 32, 00:06:56.183 "min_cntlid": 1, 00:06:56.183 "max_cntlid": 65519, 00:06:56.183 "namespaces": [ 00:06:56.183 { 00:06:56.183 "nsid": 1, 00:06:56.183 "bdev_name": "Null3", 00:06:56.183 "name": "Null3", 00:06:56.183 "nguid": "B754B0F1E8FE43F1833BFA01DFBEBD04", 00:06:56.183 "uuid": "b754b0f1-e8fe-43f1-833b-fa01dfbebd04" 00:06:56.183 } 00:06:56.183 ] 00:06:56.183 }, 00:06:56.183 { 00:06:56.183 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:56.183 "subtype": "NVMe", 00:06:56.183 "listen_addresses": [ 00:06:56.183 { 00:06:56.183 "trtype": "TCP", 00:06:56.183 "adrfam": "IPv4", 00:06:56.183 "traddr": "10.0.0.2", 00:06:56.183 "trsvcid": "4420" 00:06:56.183 } 00:06:56.183 ], 00:06:56.183 "allow_any_host": true, 00:06:56.183 "hosts": [], 00:06:56.183 "serial_number": "SPDK00000000000004", 00:06:56.183 "model_number": "SPDK bdev Controller", 00:06:56.183 "max_namespaces": 32, 00:06:56.183 "min_cntlid": 1, 00:06:56.183 "max_cntlid": 65519, 00:06:56.183 "namespaces": [ 00:06:56.183 { 00:06:56.183 "nsid": 1, 00:06:56.183 "bdev_name": "Null4", 00:06:56.183 "name": "Null4", 00:06:56.183 "nguid": "A7B8356F00064755A3946B377706AA24", 00:06:56.183 "uuid": "a7b8356f-0006-4755-a394-6b377706aa24" 00:06:56.183 } 00:06:56.183 ] 00:06:56.183 } 00:06:56.183 ] 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.183 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:56.184 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:56.184 rmmod nvme_tcp 00:06:56.184 rmmod nvme_fabrics 00:06:56.184 rmmod nvme_keyring 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:56.442 
16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 4128633 ']' 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 4128633 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 4128633 ']' 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 4128633 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4128633 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.442 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4128633' 00:06:56.443 killing process with pid 4128633 00:06:56.443 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 4128633 00:06:56.443 16:49:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 4128633 00:06:56.443 16:49:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:56.443 16:49:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:56.443 16:49:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:56.443 16:49:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:56.443 16:49:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:56.443 16:49:03 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.443 16:49:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:56.443 16:49:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.976 16:49:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:58.976 00:06:58.976 real 0m8.588s 00:06:58.976 user 0m7.298s 00:06:58.976 sys 0m3.913s 00:06:58.976 16:49:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.976 16:49:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.976 ************************************ 00:06:58.976 END TEST nvmf_target_discovery 00:06:58.976 ************************************ 00:06:58.976 16:49:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:58.976 16:49:05 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:58.976 16:49:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:58.976 16:49:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.976 16:49:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.976 ************************************ 00:06:58.976 START TEST nvmf_referrals 00:06:58.976 ************************************ 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:58.976 * Looking for test storage... 
00:06:58.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.976 
16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:58.976 16:49:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:04.249 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:04.250 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:04.250 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:04.250 Found net devices under 0000:86:00.0: cvl_0_0 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.250 16:49:10 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:04.250 Found net devices under 0000:86:00.1: cvl_0_1 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:04.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:04.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:07:04.250 00:07:04.250 --- 10.0.0.2 ping statistics --- 00:07:04.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.250 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:04.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:07:04.250 00:07:04.250 --- 10.0.0.1 ping statistics --- 00:07:04.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.250 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.250 16:49:10 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=4132198 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 4132198 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 4132198 ']' 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.250 16:49:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.250 [2024-07-15 16:49:10.514827] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:07:04.250 [2024-07-15 16:49:10.514867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.250 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.250 [2024-07-15 16:49:10.570392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.250 [2024-07-15 16:49:10.650834] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.250 [2024-07-15 16:49:10.650870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:04.250 [2024-07-15 16:49:10.650877] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.250 [2024-07-15 16:49:10.650883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.250 [2024-07-15 16:49:10.650888] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.250 [2024-07-15 16:49:10.650937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.251 [2024-07-15 16:49:10.651033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.251 [2024-07-15 16:49:10.651124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.251 [2024-07-15 16:49:10.651125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.818 [2024-07-15 16:49:11.354379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.818 16:49:11 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.818 [2024-07-15 16:49:11.363795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:04.818 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:05.077 16:49:11 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.077 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:05.335 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:05.336 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.336 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:07:05.336 16:49:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 
00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.595 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.855 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:06.113 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:06.113 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:06.113 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.114 16:49:12 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.114 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:06.373 rmmod nvme_tcp 00:07:06.373 rmmod nvme_fabrics 00:07:06.373 rmmod nvme_keyring 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 4132198 ']' 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 4132198 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 4132198 ']' 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 4132198 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4132198 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4132198' 00:07:06.373 killing process with pid 4132198 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 4132198 00:07:06.373 16:49:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 4132198 00:07:06.632 16:49:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:06.632 16:49:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:06.632 16:49:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:06.632 16:49:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:06.632 16:49:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:06.632 16:49:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.632 16:49:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.632 16:49:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.168 16:49:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:09.168 00:07:09.168 real 0m9.991s 00:07:09.168 user 0m12.309s 00:07:09.168 sys 0m4.434s 00:07:09.168 16:49:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.168 16:49:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.168 ************************************ 
00:07:09.168 END TEST nvmf_referrals 00:07:09.168 ************************************ 00:07:09.168 16:49:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:09.168 16:49:15 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:09.168 16:49:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:09.168 16:49:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.168 16:49:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.168 ************************************ 00:07:09.168 START TEST nvmf_connect_disconnect 00:07:09.168 ************************************ 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:09.168 * Looking for test storage... 00:07:09.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.168 16:49:15 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:09.168 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:09.169 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:07:09.169 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:09.169 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:09.169 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:09.169 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.169 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.169 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.169 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:09.169 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:09.169 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:09.169 16:49:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:13.358 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:13.358 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.358 16:49:19 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:13.358 Found net devices under 0000:86:00.0: cvl_0_0 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:13.358 Found net devices under 0000:86:00.1: cvl_0_1 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.358 16:49:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:13.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:13.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms
00:07:13.618
00:07:13.618 --- 10.0.0.2 ping statistics ---
00:07:13.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:13.618 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:13.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:13.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms
00:07:13.618
00:07:13.618 --- 10.0.0.1 ping statistics ---
00:07:13.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:13.618 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=4136054 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 4136054 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 4136054 ']' 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.618 16:49:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.877 [2024-07-15 16:49:20.312542] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:07:13.877 [2024-07-15 16:49:20.312584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.877 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.877 [2024-07-15 16:49:20.370158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.877 [2024-07-15 16:49:20.445168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.877 [2024-07-15 16:49:20.445210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.877 [2024-07-15 16:49:20.445217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.877 [2024-07-15 16:49:20.445222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.877 [2024-07-15 16:49:20.445248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:13.877 [2024-07-15 16:49:20.445298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:13.877 [2024-07-15 16:49:20.445395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:13.877 [2024-07-15 16:49:20.445481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:13.877 [2024-07-15 16:49:20.445482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:07:14.813 [2024-07-15 16:49:21.157286] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:14.813 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.814 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:14.814 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.814 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:14.814 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.814 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.814 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.814 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:14.814 [2024-07-15 16:49:21.209176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.814 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.814 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:14.814 16:49:21 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:14.814 16:49:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:18.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.380 rmmod nvme_tcp 00:07:31.380 rmmod nvme_fabrics 00:07:31.380 rmmod nvme_keyring 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 4136054 ']' 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 4136054 00:07:31.380 16:49:37 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 4136054 ']' 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 4136054 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4136054 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4136054' 00:07:31.380 killing process with pid 4136054 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 4136054 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 4136054 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.380 16:49:37 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:33.285 16:49:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:33.285
00:07:33.285 real 0m24.484s
00:07:33.285 user 1m9.906s
00:07:33.285 sys 0m4.886s
00:07:33.285 16:49:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:33.285 16:49:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:07:33.285 ************************************
00:07:33.285 END TEST nvmf_connect_disconnect
00:07:33.285 ************************************
00:07:33.285 16:49:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:07:33.285 16:49:39 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:07:33.285 16:49:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:07:33.285 16:49:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:33.285 16:49:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:33.285 ************************************
00:07:33.285 START TEST nvmf_multitarget
00:07:33.285 ************************************
00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:07:33.285 * Looking for test storage... 
00:07:33.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:07:33.285 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.544 16:49:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:38.815 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:38.816 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:38.816 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:38.816 Found net devices under 0000:86:00.0: cvl_0_0 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.816 16:49:45 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:38.816 Found net devices under 0000:86:00.1: cvl_0_1 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.816 16:49:45 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:38.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:07:38.816 00:07:38.816 --- 10.0.0.2 ping statistics --- 00:07:38.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.816 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:38.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms
00:07:38.816
00:07:38.816 --- 10.0.0.1 ping statistics ---
00:07:38.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:38.816 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=4142456
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 4142456
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@829 -- # '[' -z 4142456 ']' 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.816 16:49:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:38.816 [2024-07-15 16:49:45.411183] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:07:38.816 [2024-07-15 16:49:45.411235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.817 [2024-07-15 16:49:45.469111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.076 [2024-07-15 16:49:45.548650] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.076 [2024-07-15 16:49:45.548688] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.076 [2024-07-15 16:49:45.548695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.076 [2024-07-15 16:49:45.548701] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.076 [2024-07-15 16:49:45.548710] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
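The `waitforlisten` call traced above blocks until the freshly launched `nvmf_tgt` accepts RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A minimal sketch of that retry loop, assuming a plain file as a stand-in for the RPC socket (creating the real UNIX socket requires the SPDK app itself):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea from common/autotest_common.sh:
# poll until the RPC socket path appears, up to max_retries attempts.
# NOTE: a plain file stands in for /var/tmp/spdk.sock in this sketch.
waitforlisten_sketch() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$sock" ] && return 0
        sleep 0.01
    done
    echo "$sock did not appear" >&2
    return 1
}

tmp=$(mktemp -d)
( sleep 0.1; touch "$tmp/spdk.sock" ) &   # stand-in for nvmf_tgt starting up
waitforlisten_sketch "$tmp/spdk.sock" && echo "listening"
wait
rm -rf "$tmp"
```

The real helper additionally checks that the PID is still alive between retries, so a crashed target fails fast instead of burning the full retry budget.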
00:07:39.076 [2024-07-15 16:49:45.548779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.076 [2024-07-15 16:49:45.548796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.076 [2024-07-15 16:49:45.548903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.076 [2024-07-15 16:49:45.548904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.643 16:49:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.643 16:49:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:39.643 16:49:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:39.643 16:49:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.643 16:49:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:39.643 16:49:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.643 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:39.643 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:39.643 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:39.902 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:39.902 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:39.902 "nvmf_tgt_1" 00:07:39.902 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:39.902 "nvmf_tgt_2" 00:07:39.902 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:39.902 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:40.161 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:40.161 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:40.161 true 00:07:40.161 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:40.420 true 00:07:40.420 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:40.420 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:40.420 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:40.420 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:40.420 16:49:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:40.420 16:49:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:40.420 16:49:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:40.420 16:49:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:40.420 16:49:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:40.420 16:49:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:40.420 16:49:46 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:40.420 rmmod nvme_tcp 00:07:40.420 rmmod nvme_fabrics 00:07:40.420 rmmod nvme_keyring 00:07:40.420 16:49:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:40.420 16:49:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:40.420 16:49:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:40.420 16:49:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 4142456 ']' 00:07:40.420 16:49:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 4142456 00:07:40.420 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 4142456 ']' 00:07:40.420 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 4142456 00:07:40.420 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:40.420 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.420 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4142456 00:07:40.421 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:40.421 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:40.421 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4142456' 00:07:40.421 killing process with pid 4142456 00:07:40.421 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 4142456 00:07:40.421 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 4142456 00:07:40.678 16:49:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:40.678 16:49:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:40.678 16:49:47 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:40.678 16:49:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.678 16:49:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:40.678 16:49:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.678 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.678 16:49:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.295 16:49:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:43.295 00:07:43.295 real 0m9.469s 00:07:43.295 user 0m8.923s 00:07:43.295 sys 0m4.552s 00:07:43.295 16:49:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.295 16:49:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:43.295 ************************************ 00:07:43.295 END TEST nvmf_multitarget 00:07:43.295 ************************************ 00:07:43.295 16:49:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:43.295 16:49:49 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:43.295 16:49:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.295 16:49:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.295 16:49:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.295 ************************************ 00:07:43.295 START TEST nvmf_rpc 00:07:43.295 ************************************ 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:43.295 * Looking for test storage... 
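The nvmf_multitarget test that ends above drives `multitarget_rpc.py` through a create/verify/delete cycle: the target count (checked with `jq length`) goes from 1 to 3 after `nvmf_create_target` adds `nvmf_tgt_1` and `nvmf_tgt_2`, then back to 1 after both `nvmf_delete_target` calls. The same bookkeeping, sketched with a bash array standing in for the JSON list returned by `nvmf_get_targets`:

```shell
#!/usr/bin/env bash
# Stand-in for the nvmf_get_targets result; 'jq length' on the JSON
# array becomes ${#targets[@]} on the bash array in this sketch.
targets=(nvmf_tgt)                    # default target present at startup
[ "${#targets[@]}" != 1 ] && { echo "unexpected target count"; exit 1; }

targets+=(nvmf_tgt_1 nvmf_tgt_2)      # nvmf_create_target -n ... -s 32 (x2)
[ "${#targets[@]}" != 3 ] && { echo "unexpected target count"; exit 1; }

targets=(nvmf_tgt)                    # nvmf_delete_target -n nvmf_tgt_{1,2}
[ "${#targets[@]}" != 1 ] && { echo "unexpected target count"; exit 1; }
echo "target counts matched"
```

In the log, the `'[' 1 '!=' 1 ']'` and `'[' 3 '!=' 3 ']'` traces are exactly these count checks evaluating to false, i.e. passing.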
00:07:43.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.295 16:49:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:48.567 16:49:54 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:48.567 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:48.567 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.567 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:48.568 Found net devices under 0000:86:00.0: cvl_0_0 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:48.568 Found net devices under 0000:86:00.1: cvl_0_1 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
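`gather_supported_nvmf_pci_devs` (common.sh@289 onward) sorts NICs into the `e810`, `x722`, and `mlx` arrays by PCI vendor:device ID; both ports in this run report `0x8086 - 0x159b` and land in `e810` (the `ice` driver). The same lookup table, rendered as a bash associative array with the IDs copied from the trace above:

```shell
#!/usr/bin/env bash
# vendor:device -> NIC family, as assembled in nvmf/common.sh@296-318.
declare -A nic_family=(
    [0x8086:0x1592]=e810 [0x8086:0x159b]=e810
    [0x8086:0x37d2]=x722
    [0x15b3:0xa2dc]=mlx [0x15b3:0x1021]=mlx [0x15b3:0xa2d6]=mlx
    [0x15b3:0x101d]=mlx [0x15b3:0x1017]=mlx [0x15b3:0x1019]=mlx
    [0x15b3:0x1015]=mlx [0x15b3:0x1013]=mlx
)
# Both ports found in this run, 0000:86:00.0 and 0000:86:00.1:
echo "0x8086:0x159b -> ${nic_family[0x8086:0x159b]}"
```

The later `[[ 0x159b == \0\x\1\0\1\7 ]]` / `\0\x\1\0\1\9` traces are the script ruling out the two Mellanox device IDs that need RDMA-specific handling before treating the port as a plain TCP NIC.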
00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.568 16:49:54 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:48.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:07:48.568 00:07:48.568 --- 10.0.0.2 ping statistics --- 00:07:48.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.568 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:07:48.568 00:07:48.568 --- 10.0.0.1 ping statistics --- 00:07:48.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.568 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:48.568 
16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=4146239 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 4146239 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 4146239 ']' 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.568 16:49:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.568 [2024-07-15 16:49:54.881710] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:07:48.568 [2024-07-15 16:49:54.881755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.568 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.568 [2024-07-15 16:49:54.939443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.568 [2024-07-15 16:49:55.020438] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.568 [2024-07-15 16:49:55.020472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.568 [2024-07-15 16:49:55.020479] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.568 [2024-07-15 16:49:55.020488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.568 [2024-07-15 16:49:55.020493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
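Before doing any real work, both tests arm the same safety net, `trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT`, so the target is torn down and shared memory collected even if an assertion aborts the script; `trap - SIGINT SIGTERM EXIT` disarms it once the test finishes cleanly. A stripped-down sketch of that pattern, with a hypothetical `cleanup` standing in for `nvmftestfini`:

```shell
#!/usr/bin/env bash
cleanup() {
    # Stand-in for 'process_shm ... || :; nvmftestfini': runs whether the
    # script exits normally, fails partway, or is signalled.
    echo "cleanup: target torn down"
}
trap 'cleanup' SIGINT SIGTERM EXIT

echo "test body"
# On a clean finish the test disarms the trap and tears down explicitly,
# mirroring 'trap - SIGINT SIGTERM EXIT; nvmftestfini' in multitarget.sh.
trap - SIGINT SIGTERM EXIT
cleanup
```

The `|| :` in the real trap matters: it keeps a failing `process_shm` from preventing `nvmftestfini` (and thus the namespace and target teardown) from running.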
00:07:48.568 [2024-07-15 16:49:55.020538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.568 [2024-07-15 16:49:55.020633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.568 [2024-07-15 16:49:55.020717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.568 [2024-07-15 16:49:55.020718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:49.135 "tick_rate": 2300000000, 00:07:49.135 "poll_groups": [ 00:07:49.135 { 00:07:49.135 "name": "nvmf_tgt_poll_group_000", 00:07:49.135 "admin_qpairs": 0, 00:07:49.135 "io_qpairs": 0, 00:07:49.135 "current_admin_qpairs": 0, 00:07:49.135 "current_io_qpairs": 0, 00:07:49.135 "pending_bdev_io": 0, 00:07:49.135 "completed_nvme_io": 0, 00:07:49.135 "transports": [] 00:07:49.135 }, 00:07:49.135 { 00:07:49.135 "name": "nvmf_tgt_poll_group_001", 00:07:49.135 "admin_qpairs": 0, 00:07:49.135 "io_qpairs": 0, 00:07:49.135 "current_admin_qpairs": 
0, 00:07:49.135 "current_io_qpairs": 0, 00:07:49.135 "pending_bdev_io": 0, 00:07:49.135 "completed_nvme_io": 0, 00:07:49.135 "transports": [] 00:07:49.135 }, 00:07:49.135 { 00:07:49.135 "name": "nvmf_tgt_poll_group_002", 00:07:49.135 "admin_qpairs": 0, 00:07:49.135 "io_qpairs": 0, 00:07:49.135 "current_admin_qpairs": 0, 00:07:49.135 "current_io_qpairs": 0, 00:07:49.135 "pending_bdev_io": 0, 00:07:49.135 "completed_nvme_io": 0, 00:07:49.135 "transports": [] 00:07:49.135 }, 00:07:49.135 { 00:07:49.135 "name": "nvmf_tgt_poll_group_003", 00:07:49.135 "admin_qpairs": 0, 00:07:49.135 "io_qpairs": 0, 00:07:49.135 "current_admin_qpairs": 0, 00:07:49.135 "current_io_qpairs": 0, 00:07:49.135 "pending_bdev_io": 0, 00:07:49.135 "completed_nvme_io": 0, 00:07:49.135 "transports": [] 00:07:49.135 } 00:07:49.135 ] 00:07:49.135 }' 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:49.135 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.395 [2024-07-15 16:49:55.843697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:49.395 "tick_rate": 2300000000, 00:07:49.395 "poll_groups": [ 00:07:49.395 { 00:07:49.395 "name": "nvmf_tgt_poll_group_000", 00:07:49.395 "admin_qpairs": 0, 00:07:49.395 "io_qpairs": 0, 00:07:49.395 "current_admin_qpairs": 0, 00:07:49.395 "current_io_qpairs": 0, 00:07:49.395 "pending_bdev_io": 0, 00:07:49.395 "completed_nvme_io": 0, 00:07:49.395 "transports": [ 00:07:49.395 { 00:07:49.395 "trtype": "TCP" 00:07:49.395 } 00:07:49.395 ] 00:07:49.395 }, 00:07:49.395 { 00:07:49.395 "name": "nvmf_tgt_poll_group_001", 00:07:49.395 "admin_qpairs": 0, 00:07:49.395 "io_qpairs": 0, 00:07:49.395 "current_admin_qpairs": 0, 00:07:49.395 "current_io_qpairs": 0, 00:07:49.395 "pending_bdev_io": 0, 00:07:49.395 "completed_nvme_io": 0, 00:07:49.395 "transports": [ 00:07:49.395 { 00:07:49.395 "trtype": "TCP" 00:07:49.395 } 00:07:49.395 ] 00:07:49.395 }, 00:07:49.395 { 00:07:49.395 "name": "nvmf_tgt_poll_group_002", 00:07:49.395 "admin_qpairs": 0, 00:07:49.395 "io_qpairs": 0, 00:07:49.395 "current_admin_qpairs": 0, 00:07:49.395 "current_io_qpairs": 0, 00:07:49.395 "pending_bdev_io": 0, 00:07:49.395 "completed_nvme_io": 0, 00:07:49.395 "transports": [ 00:07:49.395 { 00:07:49.395 "trtype": "TCP" 00:07:49.395 } 00:07:49.395 ] 00:07:49.395 }, 00:07:49.395 { 00:07:49.395 "name": "nvmf_tgt_poll_group_003", 00:07:49.395 "admin_qpairs": 0, 00:07:49.395 "io_qpairs": 0, 00:07:49.395 "current_admin_qpairs": 0, 00:07:49.395 "current_io_qpairs": 0, 00:07:49.395 "pending_bdev_io": 0, 00:07:49.395 "completed_nvme_io": 0, 00:07:49.395 "transports": [ 00:07:49.395 { 00:07:49.395 "trtype": "TCP" 00:07:49.395 } 00:07:49.395 ] 00:07:49.395 } 
00:07:49.395 ] 00:07:49.395 }' 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.395 Malloc1 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.395 16:49:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.395 [2024-07-15 16:49:56.011753] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:49.395 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:49.395 [2024-07-15 16:49:56.040215] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:07:49.654 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:49.654 could not add new controller: failed to write to nvme-fabrics device 00:07:49.654 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:49.654 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:49.654 16:49:56 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:49.654 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:49.654 16:49:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:49.654 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.654 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.654 16:49:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.654 16:49:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.590 16:49:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.590 16:49:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:50.590 16:49:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.590 16:49:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:50.590 16:49:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:53.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.122 16:49:59 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.122 [2024-07-15 16:49:59.348076] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:07:53.122 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:53.122 could not add new controller: failed to write to nvme-fabrics device 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.122 16:49:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:54.058 16:50:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:54.058 16:50:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:54.058 16:50:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:54.058 16:50:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:54.058 16:50:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:55.960 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:55.960 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:55.960 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:55.960 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:55.961 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:55.961 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:55.961 16:50:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.220 [2024-07-15 16:50:02.700727] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
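The `jcount` and `jsum` checks exercised earlier in this run (target/rpc.sh@14-20) boil down to piping the `nvmf_get_stats` JSON through jq, then counting lines with `wc -l` or summing values with awk. A minimal standalone sketch — the helper names and filters are taken from the log above, but the inline `stats` blob is an abbreviated stand-in for the real RPC output:

```shell
#!/usr/bin/env bash
# Abbreviated stand-in for the nvmf_get_stats output captured above
# (four poll groups, all counters zero).
stats='{"tick_rate": 2300000000, "poll_groups": [
  {"name": "nvmf_tgt_poll_group_000", "admin_qpairs": 0, "io_qpairs": 0},
  {"name": "nvmf_tgt_poll_group_001", "admin_qpairs": 0, "io_qpairs": 0},
  {"name": "nvmf_tgt_poll_group_002", "admin_qpairs": 0, "io_qpairs": 0},
  {"name": "nvmf_tgt_poll_group_003", "admin_qpairs": 0, "io_qpairs": 0}]}'

# jcount: count how many values a jq filter yields (jq prints one per line)
jcount() { jq "$1" <<< "$stats" | wc -l; }

# jsum: sum the numeric values a jq filter yields
jsum() { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }

jcount '.poll_groups[].name'        # 4 poll groups, as asserted by (( 4 == 4 ))
jsum '.poll_groups[].admin_qpairs'  # 0 admin qpairs, as asserted by (( 0 == 0 ))
```

This is why the test can assert `(( 4 == 4 ))` and `(( 0 == 0 ))` above: the left-hand side is the computed count/sum substituted into the arithmetic test.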
00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.220 16:50:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.597 16:50:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:57.597 16:50:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:57.597 16:50:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:57.597 16:50:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:57.597 16:50:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:59.497 16:50:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.497 [2024-07-15 16:50:06.044217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.497 16:50:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.874 16:50:07 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:00.874 16:50:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:00.874 16:50:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.874 16:50:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:00.874 16:50:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:02.794 16:50:09 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.794 [2024-07-15 16:50:09.300202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:02.794 16:50:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.794 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.795 16:50:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.795 16:50:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:04.181 16:50:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:04.181 16:50:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:04.181 16:50:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:04.181 16:50:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:04.181 16:50:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:06.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.145 [2024-07-15 16:50:12.667181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.145 16:50:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:07.530 16:50:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:07.530 16:50:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
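The `waitforserial` helper that brackets each `nvme connect` above polls `lsblk -l -o NAME,SERIAL` until a device with the expected serial appears, retrying up to 16 times with a sleep between attempts (the `(( i++ <= 15 ))` / `grep -c` pattern visible in the xtrace). A rough re-creation — `list_devices` is a hypothetical stand-in for `lsblk` so the sketch runs without real hardware:

```shell
#!/usr/bin/env bash
# Stand-in for `lsblk -l -o NAME,SERIAL` (assumption: one namespace exposed
# with the serial used throughout this run).
list_devices() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }

# Poll until `want` devices with the given serial show up, mirroring the
# waitforserial loop from common/autotest_common.sh seen in the log.
waitforserial() {
  local serial=$1 want=${2:-1} i=0 found=0
  while (( i++ <= 15 )); do                    # up to 16 attempts
    found=$(list_devices | grep -c "$serial")  # count matching devices
    (( found == want )) && return 0            # device(s) present: success
    sleep 1                                    # not yet: back off and retry
  done
  return 1                                     # gave up
}

waitforserial SPDKISFASTANDAWESOME && echo "serial found"  # prints "serial found"
```

`waitforserial_disconnect` is the mirror image: it loops until the serial *stops* matching, which is why each disconnect above is followed by the same `lsblk`/`grep` pair with `grep -q -w`.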
00:08:07.530 16:50:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:07.530 16:50:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:07.530 16:50:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.434 [2024-07-15 16:50:15.996565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.434 16:50:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:09.434 16:50:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.434 16:50:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:09.434 16:50:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.434 16:50:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:09.434 16:50:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.434 16:50:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.434 16:50:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.434 16:50:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.810 16:50:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:10.810 16:50:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:10.811 16:50:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:10.811 16:50:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:10.811 16:50:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:12.715 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.715 16:50:19 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.715 [2024-07-15 16:50:19.346703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:12.715 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.974 [2024-07-15 16:50:19.394825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.974 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 [2024-07-15 16:50:19.446978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 [2024-07-15 16:50:19.495157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 [2024-07-15 16:50:19.543329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:12.975 "tick_rate": 2300000000, 00:08:12.975 "poll_groups": [ 00:08:12.975 { 00:08:12.975 "name": "nvmf_tgt_poll_group_000", 00:08:12.975 "admin_qpairs": 2, 00:08:12.975 "io_qpairs": 168, 00:08:12.975 "current_admin_qpairs": 0, 00:08:12.975 "current_io_qpairs": 0, 00:08:12.975 "pending_bdev_io": 0, 00:08:12.975 "completed_nvme_io": 267, 00:08:12.975 "transports": [ 00:08:12.975 { 00:08:12.975 "trtype": "TCP" 00:08:12.975 } 00:08:12.975 ] 00:08:12.975 }, 00:08:12.975 { 00:08:12.975 "name": "nvmf_tgt_poll_group_001", 00:08:12.975 "admin_qpairs": 2, 00:08:12.975 "io_qpairs": 168, 
00:08:12.975 "current_admin_qpairs": 0, 00:08:12.975 "current_io_qpairs": 0, 00:08:12.975 "pending_bdev_io": 0, 00:08:12.975 "completed_nvme_io": 219, 00:08:12.975 "transports": [ 00:08:12.975 { 00:08:12.975 "trtype": "TCP" 00:08:12.975 } 00:08:12.975 ] 00:08:12.975 }, 00:08:12.975 { 00:08:12.975 "name": "nvmf_tgt_poll_group_002", 00:08:12.975 "admin_qpairs": 1, 00:08:12.975 "io_qpairs": 168, 00:08:12.975 "current_admin_qpairs": 0, 00:08:12.975 "current_io_qpairs": 0, 00:08:12.975 "pending_bdev_io": 0, 00:08:12.975 "completed_nvme_io": 219, 00:08:12.975 "transports": [ 00:08:12.975 { 00:08:12.975 "trtype": "TCP" 00:08:12.975 } 00:08:12.975 ] 00:08:12.975 }, 00:08:12.975 { 00:08:12.975 "name": "nvmf_tgt_poll_group_003", 00:08:12.975 "admin_qpairs": 2, 00:08:12.975 "io_qpairs": 168, 00:08:12.975 "current_admin_qpairs": 0, 00:08:12.975 "current_io_qpairs": 0, 00:08:12.975 "pending_bdev_io": 0, 00:08:12.975 "completed_nvme_io": 317, 00:08:12.975 "transports": [ 00:08:12.975 { 00:08:12.975 "trtype": "TCP" 00:08:12.975 } 00:08:12.975 ] 00:08:12.975 } 00:08:12.975 ] 00:08:12.975 }' 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:12.975 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.235 rmmod nvme_tcp 00:08:13.235 rmmod nvme_fabrics 00:08:13.235 rmmod nvme_keyring 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 4146239 ']' 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 4146239 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 4146239 ']' 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 4146239 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4146239 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.235 
16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4146239' 00:08:13.235 killing process with pid 4146239 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 4146239 00:08:13.235 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 4146239 00:08:13.494 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.494 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:13.494 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:13.494 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.494 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:13.494 16:50:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.494 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.494 16:50:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.398 16:50:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.398 00:08:15.398 real 0m32.695s 00:08:15.398 user 1m41.227s 00:08:15.398 sys 0m5.760s 00:08:15.398 16:50:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.398 16:50:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.398 ************************************ 00:08:15.398 END TEST nvmf_rpc 00:08:15.398 ************************************ 00:08:15.656 16:50:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:15.656 16:50:22 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:15.656 16:50:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:15.656 16:50:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:08:15.656 16:50:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.656 ************************************ 00:08:15.656 START TEST nvmf_invalid 00:08:15.656 ************************************ 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:15.656 * Looking for test storage... 00:08:15.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:15.656 16:50:22 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.656 16:50:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:20.923 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.923 16:50:27 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:20.923 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.923 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:20.924 Found net devices under 0000:86:00.0: cvl_0_0 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:20.924 Found net devices under 0000:86:00.1: cvl_0_1 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.924 16:50:27 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:20.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:20.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:08:20.924 00:08:20.924 --- 10.0.0.2 ping statistics --- 00:08:20.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.924 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:20.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:08:20.924 00:08:20.924 --- 10.0.0.1 ping statistics --- 00:08:20.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.924 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=4153879 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 4153879 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 4153879 ']' 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.924 16:50:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:20.924 [2024-07-15 16:50:27.333654] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:08:20.924 [2024-07-15 16:50:27.333696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.924 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.924 [2024-07-15 16:50:27.390475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.924 [2024-07-15 16:50:27.474431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.924 [2024-07-15 16:50:27.474468] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:20.924 [2024-07-15 16:50:27.474475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.924 [2024-07-15 16:50:27.474481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.924 [2024-07-15 16:50:27.474486] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.924 [2024-07-15 16:50:27.474530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.924 [2024-07-15 16:50:27.474628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.924 [2024-07-15 16:50:27.474646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.924 [2024-07-15 16:50:27.474647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.492 16:50:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.492 16:50:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:21.492 16:50:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.492 16:50:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.492 16:50:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:21.751 16:50:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.751 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:21.751 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23080 00:08:21.751 [2024-07-15 16:50:28.340659] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:21.751 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:08:21.751 { 00:08:21.751 "nqn": "nqn.2016-06.io.spdk:cnode23080", 00:08:21.751 "tgt_name": "foobar", 00:08:21.751 "method": "nvmf_create_subsystem", 00:08:21.751 "req_id": 1 00:08:21.751 } 00:08:21.751 Got JSON-RPC error response 00:08:21.751 response: 00:08:21.751 { 00:08:21.751 "code": -32603, 00:08:21.751 "message": "Unable to find target foobar" 00:08:21.751 }' 00:08:21.751 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:21.751 { 00:08:21.751 "nqn": "nqn.2016-06.io.spdk:cnode23080", 00:08:21.751 "tgt_name": "foobar", 00:08:21.751 "method": "nvmf_create_subsystem", 00:08:21.751 "req_id": 1 00:08:21.751 } 00:08:21.751 Got JSON-RPC error response 00:08:21.751 response: 00:08:21.751 { 00:08:21.751 "code": -32603, 00:08:21.751 "message": "Unable to find target foobar" 00:08:21.751 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:21.751 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:21.751 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19443 00:08:22.010 [2024-07-15 16:50:28.529340] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19443: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:22.010 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:22.010 { 00:08:22.010 "nqn": "nqn.2016-06.io.spdk:cnode19443", 00:08:22.010 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:22.010 "method": "nvmf_create_subsystem", 00:08:22.010 "req_id": 1 00:08:22.010 } 00:08:22.010 Got JSON-RPC error response 00:08:22.010 response: 00:08:22.010 { 00:08:22.010 "code": -32602, 00:08:22.010 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:22.010 }' 00:08:22.010 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:22.010 { 00:08:22.010 "nqn": 
"nqn.2016-06.io.spdk:cnode19443", 00:08:22.010 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:22.010 "method": "nvmf_create_subsystem", 00:08:22.010 "req_id": 1 00:08:22.010 } 00:08:22.010 Got JSON-RPC error response 00:08:22.010 response: 00:08:22.010 { 00:08:22.010 "code": -32602, 00:08:22.010 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:22.010 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:22.010 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:22.010 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13816 00:08:22.269 [2024-07-15 16:50:28.725976] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13816: invalid model number 'SPDK_Controller' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:22.269 { 00:08:22.269 "nqn": "nqn.2016-06.io.spdk:cnode13816", 00:08:22.269 "model_number": "SPDK_Controller\u001f", 00:08:22.269 "method": "nvmf_create_subsystem", 00:08:22.269 "req_id": 1 00:08:22.269 } 00:08:22.269 Got JSON-RPC error response 00:08:22.269 response: 00:08:22.269 { 00:08:22.269 "code": -32602, 00:08:22.269 "message": "Invalid MN SPDK_Controller\u001f" 00:08:22.269 }' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:22.269 { 00:08:22.269 "nqn": "nqn.2016-06.io.spdk:cnode13816", 00:08:22.269 "model_number": "SPDK_Controller\u001f", 00:08:22.269 "method": "nvmf_create_subsystem", 00:08:22.269 "req_id": 1 00:08:22.269 } 00:08:22.269 Got JSON-RPC error response 00:08:22.269 response: 00:08:22.269 { 00:08:22.269 "code": -32602, 00:08:22.269 "message": "Invalid MN SPDK_Controller\u001f" 00:08:22.269 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@19 -- # local length=21 ll 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x3f' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=f 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.269 16:50:28 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 34 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '^x?m{'\''MU_sf[1%D"FLJ|' 00:08:22.270 16:50:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '^x?m{'\''MU_sf[1%D"FLJ|' nqn.2016-06.io.spdk:cnode9874 00:08:22.529 [2024-07-15 16:50:29.051035] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9874: invalid serial number '^x?m{'MU_sf[1%D"FLJ|' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:22.529 { 00:08:22.529 "nqn": "nqn.2016-06.io.spdk:cnode9874", 00:08:22.529 "serial_number": "^x?m\u007f{'\''MU_sf[1%D\"FLJ|", 00:08:22.529 "method": "nvmf_create_subsystem", 00:08:22.529 "req_id": 1 00:08:22.529 } 00:08:22.529 Got JSON-RPC error response 00:08:22.529 response: 00:08:22.529 { 00:08:22.529 "code": -32602, 00:08:22.529 "message": "Invalid SN ^x?m\u007f{'\''MU_sf[1%D\"FLJ|" 00:08:22.529 }' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:22.529 { 00:08:22.529 "nqn": "nqn.2016-06.io.spdk:cnode9874", 00:08:22.529 "serial_number": "^x?m\u007f{'MU_sf[1%D\"FLJ|", 00:08:22.529 "method": "nvmf_create_subsystem", 00:08:22.529 "req_id": 1 00:08:22.529 } 00:08:22.529 Got JSON-RPC error response 00:08:22.529 response: 00:08:22.529 { 00:08:22.529 "code": -32602, 00:08:22.529 "message": "Invalid SN ^x?m\u007f{'MU_sf[1%D\"FLJ|" 00:08:22.529 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # 
chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:22.529 
16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:08:22.529 
16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 
00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.529 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 
16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 
00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:22.788 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 
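The loop traced above builds a random string one character at a time. A standalone reconstruction of the `gen_random_s` helper follows; the names (`gen_random_s`, `length`, `ll`, `chars`, `string`) are taken from the xtrace, but the body is an assumption, not the exact `target/invalid.sh` source — in particular `printf -v` replaces the traced `string+=$(echo -e ...)` so that a generated space (code 32) is not stripped by command substitution:

```shell
# Reconstruction of the gen_random_s helper whose xtrace appears above.
# Names come from the trace; the body is a sketch, not the real script.
gen_random_s() {
    local length=$1 ll
    local chars=() string=
    # Codes 32..127: printable ASCII plus DEL, matching the chars array in the trace
    for ((c = 32; c <= 127; c++)); do chars+=("$c"); done
    for ((ll = 0; ll < length; ll++)); do
        local code=${chars[RANDOM % ${#chars[@]}]} hex ch
        printf -v hex '%x' "$code"   # e.g. 34 -> 22, as in `printf %x 34`
        printf -v ch "\\x$hex"       # e.g. \x22 -> '"', as in `echo -e '\x22'`
        string+=$ch
    done
    printf '%s\n' "$string"
}
```

Called as `gen_random_s 41`, this yields a 41-character string drawn from the full printable range, which is what the test feeds to `nvmf_create_subsystem` as an invalid serial or model number.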
00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '>Ebhoz$CQ-|!!`a8B'\''C2j9H.#cG`V~shub(".\yb2' 00:08:22.789 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '>Ebhoz$CQ-|!!`a8B'\''C2j9H.#cG`V~shub(".\yb2' nqn.2016-06.io.spdk:cnode11336 00:08:23.048 [2024-07-15 16:50:29.492489] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11336: invalid model number '>Ebhoz$CQ-|!!`a8B'C2j9H.#cG`V~shub(".\yb2' 00:08:23.048 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:23.048 { 00:08:23.048 "nqn": "nqn.2016-06.io.spdk:cnode11336", 00:08:23.048 "model_number": ">Ebhoz$CQ-|!!`a8B'\''C2j9H.#cG`V~shub(\".\\yb2", 00:08:23.048 "method": "nvmf_create_subsystem", 00:08:23.048 "req_id": 1 00:08:23.048 } 00:08:23.048 Got JSON-RPC error response 00:08:23.048 response: 00:08:23.048 { 00:08:23.048 "code": -32602, 00:08:23.048 "message": "Invalid MN >Ebhoz$CQ-|!!`a8B'\''C2j9H.#cG`V~shub(\".\\yb2" 00:08:23.048 }' 00:08:23.048 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:23.048 { 00:08:23.048 "nqn": "nqn.2016-06.io.spdk:cnode11336", 00:08:23.048 "model_number": ">Ebhoz$CQ-|!!`a8B'C2j9H.#cG`V~shub(\".\\yb2", 00:08:23.048 "method": "nvmf_create_subsystem", 00:08:23.048 "req_id": 1 00:08:23.048 } 00:08:23.048 Got JSON-RPC error response 00:08:23.048 response: 00:08:23.048 { 00:08:23.048 "code": -32602, 00:08:23.048 "message": "Invalid MN >Ebhoz$CQ-|!!`a8B'C2j9H.#cG`V~shub(\".\\yb2" 00:08:23.048 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:23.048 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:23.048 [2024-07-15 16:50:29.677189] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.048 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:23.306 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:23.306 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:23.306 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:23.306 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:23.306 16:50:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:23.564 [2024-07-15 16:50:30.055881] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:23.564 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:23.564 { 00:08:23.564 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:23.564 "listen_address": { 00:08:23.564 "trtype": "tcp", 00:08:23.564 "traddr": "", 00:08:23.564 "trsvcid": "4421" 00:08:23.564 }, 00:08:23.564 "method": "nvmf_subsystem_remove_listener", 00:08:23.564 "req_id": 1 00:08:23.564 } 00:08:23.564 Got JSON-RPC error response 00:08:23.564 response: 00:08:23.564 { 00:08:23.564 "code": -32602, 00:08:23.564 "message": "Invalid parameters" 00:08:23.564 }' 00:08:23.564 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:23.564 { 00:08:23.564 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:23.564 "listen_address": { 00:08:23.564 "trtype": "tcp", 00:08:23.564 "traddr": "", 00:08:23.564 "trsvcid": "4421" 00:08:23.564 }, 00:08:23.564 "method": "nvmf_subsystem_remove_listener", 00:08:23.564 "req_id": 1 00:08:23.564 } 00:08:23.564 Got JSON-RPC error response 00:08:23.564 response: 00:08:23.564 { 00:08:23.564 "code": -32602, 00:08:23.564 "message": "Invalid parameters" 00:08:23.564 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:23.564 16:50:30 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode602 -i 0 00:08:23.823 [2024-07-15 16:50:30.248485] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode602: invalid cntlid range [0-65519] 00:08:23.823 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:23.823 { 00:08:23.823 "nqn": "nqn.2016-06.io.spdk:cnode602", 00:08:23.823 "min_cntlid": 0, 00:08:23.823 "method": "nvmf_create_subsystem", 00:08:23.823 "req_id": 1 00:08:23.823 } 00:08:23.823 Got JSON-RPC error response 00:08:23.823 response: 00:08:23.823 { 00:08:23.823 "code": -32602, 00:08:23.823 "message": "Invalid cntlid range [0-65519]" 00:08:23.823 }' 00:08:23.823 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:23.823 { 00:08:23.823 "nqn": "nqn.2016-06.io.spdk:cnode602", 00:08:23.823 "min_cntlid": 0, 00:08:23.823 "method": "nvmf_create_subsystem", 00:08:23.823 "req_id": 1 00:08:23.823 } 00:08:23.823 Got JSON-RPC error response 00:08:23.823 response: 00:08:23.823 { 00:08:23.823 "code": -32602, 00:08:23.823 "message": "Invalid cntlid range [0-65519]" 00:08:23.823 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:23.823 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12461 -i 65520 00:08:23.823 [2024-07-15 16:50:30.437138] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12461: invalid cntlid range [65520-65519] 00:08:23.823 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:23.823 { 00:08:23.823 "nqn": "nqn.2016-06.io.spdk:cnode12461", 00:08:23.823 "min_cntlid": 65520, 00:08:23.823 "method": "nvmf_create_subsystem", 00:08:23.823 "req_id": 1 00:08:23.823 } 00:08:23.823 Got JSON-RPC error response 00:08:23.823 
response: 00:08:23.823 { 00:08:23.823 "code": -32602, 00:08:23.823 "message": "Invalid cntlid range [65520-65519]" 00:08:23.823 }' 00:08:23.823 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:23.823 { 00:08:23.823 "nqn": "nqn.2016-06.io.spdk:cnode12461", 00:08:23.823 "min_cntlid": 65520, 00:08:23.823 "method": "nvmf_create_subsystem", 00:08:23.823 "req_id": 1 00:08:23.823 } 00:08:23.823 Got JSON-RPC error response 00:08:23.823 response: 00:08:23.823 { 00:08:23.823 "code": -32602, 00:08:23.823 "message": "Invalid cntlid range [65520-65519]" 00:08:23.823 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:23.823 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23588 -I 0 00:08:24.086 [2024-07-15 16:50:30.625791] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23588: invalid cntlid range [1-0] 00:08:24.086 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:24.086 { 00:08:24.086 "nqn": "nqn.2016-06.io.spdk:cnode23588", 00:08:24.086 "max_cntlid": 0, 00:08:24.086 "method": "nvmf_create_subsystem", 00:08:24.086 "req_id": 1 00:08:24.086 } 00:08:24.086 Got JSON-RPC error response 00:08:24.086 response: 00:08:24.086 { 00:08:24.086 "code": -32602, 00:08:24.086 "message": "Invalid cntlid range [1-0]" 00:08:24.086 }' 00:08:24.086 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:24.086 { 00:08:24.086 "nqn": "nqn.2016-06.io.spdk:cnode23588", 00:08:24.086 "max_cntlid": 0, 00:08:24.086 "method": "nvmf_create_subsystem", 00:08:24.086 "req_id": 1 00:08:24.086 } 00:08:24.086 Got JSON-RPC error response 00:08:24.086 response: 00:08:24.086 { 00:08:24.086 "code": -32602, 00:08:24.086 "message": "Invalid cntlid range [1-0]" 00:08:24.086 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:24.086 16:50:30 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5996 -I 65520 00:08:24.379 [2024-07-15 16:50:30.814409] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5996: invalid cntlid range [1-65520] 00:08:24.379 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:24.379 { 00:08:24.379 "nqn": "nqn.2016-06.io.spdk:cnode5996", 00:08:24.379 "max_cntlid": 65520, 00:08:24.379 "method": "nvmf_create_subsystem", 00:08:24.379 "req_id": 1 00:08:24.379 } 00:08:24.379 Got JSON-RPC error response 00:08:24.379 response: 00:08:24.379 { 00:08:24.379 "code": -32602, 00:08:24.379 "message": "Invalid cntlid range [1-65520]" 00:08:24.379 }' 00:08:24.379 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:08:24.379 { 00:08:24.379 "nqn": "nqn.2016-06.io.spdk:cnode5996", 00:08:24.379 "max_cntlid": 65520, 00:08:24.379 "method": "nvmf_create_subsystem", 00:08:24.379 "req_id": 1 00:08:24.379 } 00:08:24.379 Got JSON-RPC error response 00:08:24.379 response: 00:08:24.379 { 00:08:24.379 "code": -32602, 00:08:24.379 "message": "Invalid cntlid range [1-65520]" 00:08:24.379 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:24.379 16:50:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8999 -i 6 -I 5 00:08:24.379 [2024-07-15 16:50:30.986994] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8999: invalid cntlid range [6-5] 00:08:24.379 16:50:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:24.379 { 00:08:24.379 "nqn": "nqn.2016-06.io.spdk:cnode8999", 00:08:24.379 "min_cntlid": 6, 00:08:24.379 "max_cntlid": 5, 00:08:24.379 "method": "nvmf_create_subsystem", 00:08:24.379 "req_id": 1 00:08:24.379 } 00:08:24.379 Got JSON-RPC error response 00:08:24.379 
response: 00:08:24.379 { 00:08:24.379 "code": -32602, 00:08:24.379 "message": "Invalid cntlid range [6-5]" 00:08:24.379 }' 00:08:24.379 16:50:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:24.379 { 00:08:24.379 "nqn": "nqn.2016-06.io.spdk:cnode8999", 00:08:24.379 "min_cntlid": 6, 00:08:24.379 "max_cntlid": 5, 00:08:24.379 "method": "nvmf_create_subsystem", 00:08:24.379 "req_id": 1 00:08:24.379 } 00:08:24.379 Got JSON-RPC error response 00:08:24.379 response: 00:08:24.379 { 00:08:24.379 "code": -32602, 00:08:24.379 "message": "Invalid cntlid range [6-5]" 00:08:24.379 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:24.379 16:50:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:24.638 { 00:08:24.638 "name": "foobar", 00:08:24.638 "method": "nvmf_delete_target", 00:08:24.638 "req_id": 1 00:08:24.638 } 00:08:24.638 Got JSON-RPC error response 00:08:24.638 response: 00:08:24.638 { 00:08:24.638 "code": -32602, 00:08:24.638 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:24.638 }' 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:24.638 { 00:08:24.638 "name": "foobar", 00:08:24.638 "method": "nvmf_delete_target", 00:08:24.638 "req_id": 1 00:08:24.638 } 00:08:24.638 Got JSON-RPC error response 00:08:24.638 response: 00:08:24.638 { 00:08:24.638 "code": -32602, 00:08:24.638 "message": "The specified target doesn't exist, cannot delete it." 
00:08:24.638 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:24.638 rmmod nvme_tcp 00:08:24.638 rmmod nvme_fabrics 00:08:24.638 rmmod nvme_keyring 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 4153879 ']' 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 4153879 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 4153879 ']' 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 4153879 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4153879 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:24.638 16:50:31 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4153879' 00:08:24.638 killing process with pid 4153879 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 4153879 00:08:24.638 16:50:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 4153879 00:08:24.897 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:24.897 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:24.897 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:24.897 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:24.897 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:24.897 16:50:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.897 16:50:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.897 16:50:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.430 16:50:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:27.430 00:08:27.430 real 0m11.367s 00:08:27.430 user 0m19.385s 00:08:27.430 sys 0m4.776s 00:08:27.430 16:50:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.430 16:50:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:27.430 ************************************ 00:08:27.430 END TEST nvmf_invalid 00:08:27.430 ************************************ 00:08:27.430 16:50:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:27.430 16:50:33 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 
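For reference, the nvmf_abort test launched above configures the SPDK target entirely over JSON-RPC before running the abort example; every call below appears verbatim in the xtrace that follows. This is a hedged standalone sketch, not the test script itself: it assumes the Jenkins workspace path from this log, uses the stock `scripts/rpc.py` client in place of the suite's `rpc_cmd` wrapper, and requires a running `nvmf_tgt` plus the NIC/netns setup the log performs before the test body.

```shell
# Sketch of the RPC sequence abort.sh drives, reconstructed from the trace.
# Assumes a running nvmf_tgt and the 10.0.0.2 target address set up earlier.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# Create the TCP transport (-o: C2H success optimization, 8192-byte in-capsule
# data, admin queue depth 256) -- flags as echoed by the test's rpc_cmd calls.
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256

# Back the namespace with a 64 MiB / 4096-byte-block malloc bdev, wrapped in a
# delay bdev so aborts have in-flight I/O to race against.
$RPC bdev_malloc_create 64 4096 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Expose Delay0 through subsystem cnode0, listening on 10.0.0.2:4420.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Hammer the subsystem with the bundled abort example (queue depth 128, 1 s).
"$SPDK/build/examples/abort" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
```

The sequence is environment-bound (it needs SPDK built in that workspace and the cvl_0_* interfaces configured), so it is illustrative only; the authoritative steps are the `target/abort.sh` trace below.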
00:08:27.430 16:50:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:27.430 16:50:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.430 16:50:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.430 ************************************ 00:08:27.430 START TEST nvmf_abort 00:08:27.430 ************************************ 00:08:27.430 16:50:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:27.430 * Looking for test storage... 00:08:27.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.430 16:50:33 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.430 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:27.430 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.430 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.430 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.430 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.430 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.430 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:27.431 
16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.431 
16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:27.431 16:50:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # 
set +x 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:32.699 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.700 16:50:38 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:32.700 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:08:32.700 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:32.700 Found net devices under 0000:86:00.0: cvl_0_0 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:32.700 Found net devices under 0000:86:00.1: cvl_0_1 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:08:32.700 00:08:32.700 --- 10.0.0.2 ping statistics --- 00:08:32.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.700 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:32.700 00:08:32.700 --- 10.0.0.1 ping statistics --- 00:08:32.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.700 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.700 16:50:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.700 16:50:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=4158222 00:08:32.700 16:50:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 4158222 00:08:32.700 16:50:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:32.700 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 4158222 ']' 00:08:32.700 16:50:39 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.700 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.700 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.700 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.700 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.700 [2024-07-15 16:50:39.054298] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:08:32.700 [2024-07-15 16:50:39.054340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.700 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.700 [2024-07-15 16:50:39.109600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.700 [2024-07-15 16:50:39.189061] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.700 [2024-07-15 16:50:39.189095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.700 [2024-07-15 16:50:39.189102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.700 [2024-07-15 16:50:39.189108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.700 [2024-07-15 16:50:39.189113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:32.700 [2024-07-15 16:50:39.189208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.700 [2024-07-15 16:50:39.189291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.701 [2024-07-15 16:50:39.189293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.267 [2024-07-15 16:50:39.914100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.267 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.525 Malloc0 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.525 Delay0 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.525 [2024-07-15 16:50:39.980611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.525 16:50:39 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:33.525 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.525 [2024-07-15 16:50:40.050820] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:35.427 Initializing NVMe Controllers 00:08:35.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:35.427 controller IO queue size 128 less than required 00:08:35.427 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:35.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:35.427 Initialization complete. Launching workers. 
00:08:35.427 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 43043 00:08:35.427 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43104, failed to submit 62 00:08:35.427 success 43047, unsuccess 57, failed 0 00:08:35.427 16:50:42 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:35.427 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.427 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.687 rmmod nvme_tcp 00:08:35.687 rmmod nvme_fabrics 00:08:35.687 rmmod nvme_keyring 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 4158222 ']' 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 4158222 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 4158222 ']' 00:08:35.687 16:50:42 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 4158222 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4158222 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4158222' 00:08:35.687 killing process with pid 4158222 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 4158222 00:08:35.687 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 4158222 00:08:35.946 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.946 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.946 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.946 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.946 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.946 16:50:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.946 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.946 16:50:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.850 16:50:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:37.850 00:08:37.850 real 0m10.912s 00:08:37.850 user 0m12.838s 00:08:37.850 sys 0m4.893s 00:08:37.850 16:50:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:08:37.850 16:50:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.850 ************************************ 00:08:37.850 END TEST nvmf_abort 00:08:37.850 ************************************ 00:08:37.850 16:50:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:37.850 16:50:44 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:37.850 16:50:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:37.850 16:50:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.850 16:50:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.109 ************************************ 00:08:38.109 START TEST nvmf_ns_hotplug_stress 00:08:38.109 ************************************ 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:38.109 * Looking for test storage... 
00:08:38.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.109 16:50:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.109 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.110 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.110 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.110 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.110 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.110 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.110 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.110 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.110 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.110 16:50:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:43.383 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.383 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.384 
16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:43.384 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:43.384 
Found net devices under 0000:86:00.0: cvl_0_0 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:43.384 Found net devices under 0000:86:00.1: cvl_0_1 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.384 16:50:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:43.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:08:43.384 00:08:43.384 --- 10.0.0.2 ping statistics --- 00:08:43.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.384 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:08:43.384 00:08:43.384 --- 10.0.0.1 ping statistics --- 00:08:43.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.384 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=4162237 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 4162237 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 4162237 ']' 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.384 16:50:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.384 [2024-07-15 16:50:49.960729] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:08:43.384 [2024-07-15 16:50:49.960776] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.384 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.384 [2024-07-15 16:50:50.016635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.643 [2024-07-15 16:50:50.110755] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.643 [2024-07-15 16:50:50.110788] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.643 [2024-07-15 16:50:50.110796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.643 [2024-07-15 16:50:50.110802] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.643 [2024-07-15 16:50:50.110808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:43.643 [2024-07-15 16:50:50.110843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.643 [2024-07-15 16:50:50.110945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.643 [2024-07-15 16:50:50.110947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.210 16:50:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.210 16:50:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:44.210 16:50:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:44.210 16:50:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.210 16:50:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.210 16:50:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.210 16:50:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:44.210 16:50:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:44.468 [2024-07-15 16:50:50.983223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.468 16:50:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:44.727 16:50:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.727 [2024-07-15 16:50:51.356563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:08:44.727 16:50:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.987 16:50:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:45.246 Malloc0 00:08:45.246 16:50:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:45.504 Delay0 00:08:45.504 16:50:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.504 16:50:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:45.763 NULL1 00:08:45.763 16:50:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:46.021 16:50:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4162690 00:08:46.021 16:50:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:46.021 16:50:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:46.021 16:50:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.021 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.021 16:50:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.280 16:50:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:46.280 16:50:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:46.568 true 00:08:46.568 16:50:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:46.568 16:50:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.568 16:50:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.825 16:50:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:46.825 16:50:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:47.083 true 00:08:47.083 16:50:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:47.083 16:50:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.463 Read completed with error (sct=0, sc=11) 00:08:48.463 16:50:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.463 16:50:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:48.463 16:50:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:48.721 true 00:08:48.721 16:50:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:48.721 16:50:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.656 16:50:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.656 16:50:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:49.656 16:50:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:49.656 true 00:08:49.913 16:50:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:49.913 16:50:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.913 16:50:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.171 16:50:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:50.171 16:50:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:50.429 true 00:08:50.429 16:50:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:50.429 16:50:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.801 16:50:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.801 16:50:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:51.801 16:50:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:51.801 true 00:08:52.059 16:50:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:52.059 16:50:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.992 16:50:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.992 16:50:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:52.992 16:50:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:52.992 true 00:08:53.250 16:50:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:53.250 16:50:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.250 16:50:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.509 16:51:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:53.509 16:51:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:53.767 true 00:08:53.767 16:51:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:53.767 16:51:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.715 16:51:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.974 16:51:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:54.974 16:51:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:55.231 true 00:08:55.231 16:51:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:55.231 16:51:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.169 16:51:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.169 16:51:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:56.169 16:51:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1010 00:08:56.434 true 00:08:56.434 16:51:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:56.434 16:51:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.705 16:51:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.705 16:51:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:56.705 16:51:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:56.964 true 00:08:56.964 16:51:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:56.964 16:51:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.339 16:51:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.339 16:51:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:58.339 16:51:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:58.598 true 00:08:58.598 16:51:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:58.598 16:51:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.543 16:51:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.543 16:51:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:59.543 16:51:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:59.802 true 00:08:59.802 16:51:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:08:59.802 16:51:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.802 16:51:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.061 16:51:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:00.061 16:51:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:00.319 true 00:09:00.319 16:51:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:00.320 16:51:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.255 16:51:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.513 16:51:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:01.513 16:51:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:01.772 true 00:09:01.772 16:51:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:01.772 16:51:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.735 16:51:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.735 16:51:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:02.735 16:51:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:02.993 true 00:09:02.993 16:51:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:02.993 16:51:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.252 16:51:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.252 16:51:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:03.252 16:51:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:03.510 true 00:09:03.510 16:51:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:03.510 16:51:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.888 16:51:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.888 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.888 16:51:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:04.888 16:51:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:04.888 true 00:09:05.256 16:51:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:05.256 16:51:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.823 16:51:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.082 16:51:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:06.082 16:51:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:06.082 true 00:09:06.341 16:51:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:06.341 16:51:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.341 16:51:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.600 16:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:06.600 16:51:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:06.859 true 00:09:06.859 16:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:06.859 16:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.859 16:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.117 16:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:07.117 16:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:07.376 true 00:09:07.376 16:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:07.376 16:51:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.633 16:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.633 16:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:07.633 16:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:07.891 true 00:09:07.891 16:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:07.891 
16:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.269 16:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.269 16:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:09.269 16:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:09.527 true 00:09:09.527 16:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:09.527 16:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.464 16:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.464 16:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:10.464 16:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:10.722 true 00:09:10.722 16:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:10.722 16:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.980 16:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.980 16:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:10.980 16:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:11.239 true 00:09:11.239 16:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:11.239 16:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.614 16:51:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:09:12.614 16:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:12.614 16:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:12.873 true 00:09:12.873 16:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:12.873 16:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.809 16:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.809 16:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:13.809 16:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:13.809 true 00:09:14.068 16:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:14.068 16:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.068 16:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.327 16:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:14.327 16:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 
00:09:14.587 true 00:09:14.587 16:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:14.587 16:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.587 16:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.860 [2024-07-15 16:51:21.402438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.860 [2024-07-15 16:51:21.402518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.860 [2024-07-15 16:51:21.402565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.860 [2024-07-15 16:51:21.402608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.860 [2024-07-15 16:51:21.402662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.860 [2024-07-15 16:51:21.402703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.860 [2024-07-15 16:51:21.402747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:14.860 [2024-07-15 16:51:21.407597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:09:14.861 [2024-07-15 16:51:21.407633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.407670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.407709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.407749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.407789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.407827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.407865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.407903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.407937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.407974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408157] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.408995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.409181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.409231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.409274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.409317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.409363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.409408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.409454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 
16:51:21.409497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.409541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.861 [2024-07-15 16:51:21.409584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.409626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.409674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.409714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.409753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.409792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.409832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.409869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.409914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.409959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 
[2024-07-15 16:51:21.410658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.410999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411219] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.411754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.412526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.412575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:14.862 [2024-07-15 16:51:21.412622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.412674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.412716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.412756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.412797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.412841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.412888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.412928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.412973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413241] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.413984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 
16:51:21.414368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.862 [2024-07-15 16:51:21.414480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.414982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 
[2024-07-15 16:51:21.415637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.415912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.416402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.416439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.416477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.416516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.416553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.416590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.416629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.863 [2024-07-15 16:51:21.416667] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the *ERROR* line above repeats verbatim several hundred times between 16:51:21.416706 and 16:51:21.430697; duplicate records omitted]
00:09:14.866 [2024-07-15 16:51:21.430731] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.430774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.430814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.430853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.430896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.430943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.430984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.431028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 16:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:14.866 16:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:14.866 [2024-07-15 16:51:21.431796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.431842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.431888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.431934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.431976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 
16:51:21.432623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.432962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 [2024-07-15 16:51:21.433738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.866 
[2024-07-15 16:51:21.433778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.433815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.433858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.433897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.433936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.433974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434364] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.434954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:14.867 [2024-07-15 16:51:21.435049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 
16:51:21.435140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.435988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 
[2024-07-15 16:51:21.436361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436897] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.436978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.437019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.437058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.437098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.437138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.437174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.437970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438925] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.438970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.439014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.439058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.439112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.867 [2024-07-15 16:51:21.439149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.439989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.440026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 16:51:21.440067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 
16:51:21.440106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.868 [2024-07-15 
16:51:21.454945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.871 [2024-07-15 16:51:21.454985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.871 [2024-07-15 16:51:21.455022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.871 [2024-07-15 16:51:21.455058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.871 [2024-07-15 16:51:21.455100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.871 [2024-07-15 16:51:21.455131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.871 [2024-07-15 16:51:21.455167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.871 [2024-07-15 16:51:21.455205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.871 [2024-07-15 16:51:21.455251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.871 [2024-07-15 16:51:21.455299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.871 [2024-07-15 16:51:21.455332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.871 [2024-07-15 16:51:21.455370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.455990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 
[2024-07-15 16:51:21.456077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456673] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.456982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.457030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.457072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.457113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.457161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.457208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.457257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:14.872 [2024-07-15 16:51:21.457299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.457340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.457384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.457431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.457483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458697] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.458974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 
16:51:21.459832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.459989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.460031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.460077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.460124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.460167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.460211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.460257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.460302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.460349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.460394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.872 [2024-07-15 16:51:21.460445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.460492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.460535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.460588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.460631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.460692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.460738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.460783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.460825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.460870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.460914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.460960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 
[2024-07-15 16:51:21.461293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.461837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462313] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.462995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463491] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.463990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.464031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.464091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.464133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:14.873 [2024-07-15 16:51:21.464182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical message repeated for each read command; entries with timestamps 16:51:21.464231 through 16:51:21.478956 omitted ...]
00:09:14.877 [2024-07-15 16:51:21.479001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 
16:51:21.479574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.479963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.480975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 
[2024-07-15 16:51:21.481253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481911] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.481955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.482976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.483015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.483055] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.483097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.483135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.483174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.483215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.483261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.483302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.483342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.483380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.877 [2024-07-15 16:51:21.483422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.483462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.483498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.483541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:14.878 [2024-07-15 
16:51:21.484109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.484985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 
[2024-07-15 16:51:21.485449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.485973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486021] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.486774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.878 [2024-07-15 16:51:21.487648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.487697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.487740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.487783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.487829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.487870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.487916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.487960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488051] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:14.879 [2024-07-15 16:51:21.488730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c:309 repeats many more times between 16:51:21.488777 and 16:51:21.503345 (log clock 00:09:14.879-00:09:14.882); duplicate entries elided ...]
00:09:14.882 [2024-07-15 16:51:21.503385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.503968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 
16:51:21.504011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.504962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 
[2024-07-15 16:51:21.505459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.882 [2024-07-15 16:51:21.505595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.505640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.505689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.505729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.505768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.505804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.505835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.505876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.505918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.505954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.505988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506025] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.506983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.507022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.507065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.507105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.507149] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.507188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.883 [2024-07-15 16:51:21.508718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.508761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.508808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.508848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.508881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.508920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.508961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.508997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 
16:51:21.509185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.509995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 
[2024-07-15 16:51:21.510315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.510992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511077] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.884 [2024-07-15 16:51:21.511891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.511935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.511980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.512026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.512070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.512114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.512160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.512202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.512251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.512297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.512338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.512377] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:14.885 [2024-07-15 16:51:21.512414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical error lines from 16:51:21.512444 through 16:51:21.527276 omitted] 00:09:15.175 [2024-07-15 16:51:21.527314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.527987] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.528989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 
16:51:21.529271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.529990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 
[2024-07-15 16:51:21.530648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.530965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.531015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.531060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.531102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.531146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.175 [2024-07-15 16:51:21.531191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531281] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.176 [2024-07-15 16:51:21.531954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532532] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.532688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.533516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.533557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:15.176 [2024-07-15 16:51:21.533598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.533633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.533671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.533708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.533751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.533793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.533850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 
16:51:21.533895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.533940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.533988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.534997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 
[2024-07-15 16:51:21.535147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535704] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.535985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.536029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.536077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.536120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.536165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.536356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.536402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.536446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.176 [2024-07-15 16:51:21.536490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.176 [2024-07-15 16:51:21.536542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.536586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.536627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.536672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.536716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.536761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.536803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.536852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.536899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.536946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.536991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.537040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.537095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.537583] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.177 [2024-07-15 16:51:21.537627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated continuously from 16:51:21.537 through 16:51:21.551 (log timestamps 00:09:15.177-00:09:15.180); repeats elided]
> SGL length 1 00:09:15.180 [2024-07-15 16:51:21.551588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.551626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.551666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.551709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.551749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.551790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.551833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.551872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.551907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.551951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.551988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552156] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.552798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.553489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.553541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.553584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.553630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.553679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.553724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.553766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.553824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.553874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.553918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.553963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.554006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.554046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.554085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 
16:51:21.554117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.554156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.180 [2024-07-15 16:51:21.554195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.554971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 
[2024-07-15 16:51:21.555290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555884] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.555976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.556993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.557037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.557076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.557107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.557150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.557190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.557232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.557271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.557317] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.557361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.557403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.181 [2024-07-15 16:51:21.557446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.557495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.557535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.557579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.557615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.557647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.557683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.557725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.557762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.558997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 
16:51:21.559047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.559979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.560023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.560066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.560111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.560150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.560186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.560223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.560267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 
[2024-07-15 16:51:21.560313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.560356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.182 [2024-07-15 16:51:21.560394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560872] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.560993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.561187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.561238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.561285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.561328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.561380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.561428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.561472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.561517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.561562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.561619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.183 [2024-07-15 16:51:21.561672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1
00:09:15.183 [2024-07-15 16:51:21.561719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line repeated many times with successive timestamps, 16:51:21.561766 through 16:51:21.576930; repeats elided ...]
> SGL length 1 00:09:15.187 [2024-07-15 16:51:21.576969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577520] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.577978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.578025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.578069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.578114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.578161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.578208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.578261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.578307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.578352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.578401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.578452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.187 [2024-07-15 16:51:21.578498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.578543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.578588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.578643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.578685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.578730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.578781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.578828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 
16:51:21.578871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.578916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.578969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.579013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.579056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.579102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.579143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.579188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 
[2024-07-15 16:51:21.580886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.580979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581499] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.188 [2024-07-15 16:51:21.581692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.581735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.581779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.581822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.581863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.581900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.581946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.581989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582918] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.582960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.583963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 
16:51:21.584161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.189 [2024-07-15 16:51:21.584975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 
[2024-07-15 16:51:21.585356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.585536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:15.190 [2024-07-15 16:51:21.586332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:15.190 [2024-07-15 16:51:21.586754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.586981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.587030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.587068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.587105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.587148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.587190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.587235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.587274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.587307] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [2024-07-15 16:51:21.587345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.190 [... identical error entries repeated from 16:51:21.587386 through 16:51:21.601310 elided ...] 00:09:15.194 
[2024-07-15 16:51:21.601361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 true 00:09:15.194 [2024-07-15 16:51:21.601794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601952] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.601995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.602034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.602073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.602111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.602153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.602183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.194 [2024-07-15 16:51:21.602222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.603966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604054] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.604967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 
16:51:21.605327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.605827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.606003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.606044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.606083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.606120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.606167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.606208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.606255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.195 [2024-07-15 16:51:21.606294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.606327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.606366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.606407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.606449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.606491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.606530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.606568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 
[2024-07-15 16:51:21.606603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.606644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607719] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.607979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608904] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.608971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.196 [2024-07-15 16:51:21.609797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.609982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 
16:51:21.610262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.197 [2024-07-15 16:51:21.610953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 16:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:15.201 16:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.201 [2024-07-15 16:51:21.626383] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.201 [2024-07-15 16:51:21.626967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.627001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.627036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.627083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.627127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.627175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.627220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.627274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.627319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.627368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.201 [2024-07-15 16:51:21.627413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627600] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.627960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 
16:51:21.628874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.628997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.629974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 
[2024-07-15 16:51:21.630188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.202 [2024-07-15 16:51:21.630642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.630689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.630731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.630780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.630824] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.630870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.630919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.630968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.631815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.632590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.632633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.632673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.632712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.632751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.632787] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.632824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.632861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.632897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.632932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.632970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.633960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 
16:51:21.634057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.203 [2024-07-15 16:51:21.634513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.634973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.635014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.635053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.635089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.635125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 [2024-07-15 16:51:21.635161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 
[2024-07-15 16:51:21.635195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.204 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:15.204 [2024-07-15 16:51:21.650038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.650652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 
[2024-07-15 16:51:21.650693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.208 [2024-07-15 16:51:21.651939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.651975] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.652986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653206] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.653976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 
16:51:21.654528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.654991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.655033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.209 [2024-07-15 16:51:21.655082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 
[2024-07-15 16:51:21.655873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.655963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656515] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.656967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.657006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.657044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.657822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.657874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.210 [2024-07-15 16:51:21.657917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.657957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.657999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658481] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.658982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.210 [2024-07-15 16:51:21.659025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 16:51:21.659703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 
16:51:21.659741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.211 [2024-07-15 
16:51:21.674619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.674662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.674706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.674754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.674799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.674978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.675995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 
[2024-07-15 16:51:21.676037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676611] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.215 [2024-07-15 16:51:21.676849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.676888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.676929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.676965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.677573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678507] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.678962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 
16:51:21.679678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.679992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.216 [2024-07-15 16:51:21.680798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.680835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 
[2024-07-15 16:51:21.680873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.680913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681659] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.681844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.682977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683390] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.217 [2024-07-15 16:51:21.683988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.218 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:15.221 [2024-07-15 
16:51:21.699140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.699968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 
[2024-07-15 16:51:21.700375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700913] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.700987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.701026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.701066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.701107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.701146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.701188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.701241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.701284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.221 [2024-07-15 16:51:21.701334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.701377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.701419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.701465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.701512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.222 [2024-07-15 16:51:21.701698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.701744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.701790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.701837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.701880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.701924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.701968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702330] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.702999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.703954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 
16:51:21.703995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.704965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 
[2024-07-15 16:51:21.705137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.222 [2024-07-15 16:51:21.705920] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.705963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.706991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707105] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.223 [2024-07-15 16:51:21.707687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 
16:51:21.723843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.723963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.226 [2024-07-15 16:51:21.724001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.724968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 
[2024-07-15 16:51:21.725015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725657] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.725930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.227 [2024-07-15 16:51:21.726750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.726792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.726830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.726872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.726922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.726961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.726998] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.727999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 
16:51:21.728178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.728692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.729991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 
[2024-07-15 16:51:21.730187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.228 [2024-07-15 16:51:21.730615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.730650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.730684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.730720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.730758] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.730799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.730842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.730881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.730918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.730956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.730993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 [2024-07-15 16:51:21.731980] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.229 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:15.232 [2024-07-15 16:51:21.746956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:09:15.232 [2024-07-15 16:51:21.747003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.232 [2024-07-15 16:51:21.747049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 
16:51:21.747637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.747981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.748508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 
[2024-07-15 16:51:21.749298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749929] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.749976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.750987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.751032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.751079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.751116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.751153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.751182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.751229] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.233 [2024-07-15 16:51:21.751268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.751746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 
16:51:21.752901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.752995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.753970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.754006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.754040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.754079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.754117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 
[2024-07-15 16:51:21.754154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.754193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.754239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.754283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.234 [2024-07-15 16:51:21.754321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754720] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.754879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.755668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.755721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.755764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.755810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.755860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.755904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.755948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.755993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756769] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.235 [2024-07-15 16:51:21.756812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 (last message repeated; identical *ERROR* lines with timestamps 16:51:21.756863 through 16:51:21.770899 omitted)
> SGL length 1 00:09:15.239 [2024-07-15 16:51:21.770938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.770981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771567] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.771941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 
16:51:21.772940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.772980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.773021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.773063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.773095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.773133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.239 [2024-07-15 16:51:21.773172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.773982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 
[2024-07-15 16:51:21.774126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.774739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775535] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.775982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776701] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.776981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.240 [2024-07-15 16:51:21.777689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.777734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.777778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.777828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.777876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.777918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.777966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 
16:51:21.778013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.778988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 
[2024-07-15 16:51:21.779685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.779972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780248] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.241 [2024-07-15 16:51:21.780926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical "Read NLB 1 * block size 512 > SGL length 1" errors repeated from 16:51:21.780972 through 16:51:21.794515, log timestamps 00:09:15.241-00:09:15.245]
00:09:15.242 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:15.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:15.525 16:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:15.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:15.525 [2024-07-15 16:51:22.005995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*:
Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 
16:51:22.006687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.006996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 
[2024-07-15 16:51:22.007882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.007957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008438] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.525 [2024-07-15 16:51:22.008615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.008798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.008839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.008880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.008919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.008954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.008991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.009983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010199] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.010971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 
16:51:22.011434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.011988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 
[2024-07-15 16:51:22.012720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.012991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013351] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.526 [2024-07-15 16:51:22.013693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.013741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.013784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.013827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.013875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.013915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.013957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.527 [2024-07-15 16:51:22.013998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014566] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.014970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.015008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.015047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.015084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.527 [2024-07-15 16:51:22.015127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.527 [... identical "nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeated through 16:51:22.030279, log timestamps 00:09:15.527-00:09:15.531 ...] 00:09:15.531 [2024-07-15 16:51:22.030322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 
16:51:22.030917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.030995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.031876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.032663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.032704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.032745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.032783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.032824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.032864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 
[2024-07-15 16:51:22.032898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.032936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.032973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033427] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.531 [2024-07-15 16:51:22.033855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.033896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.033939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.033984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034677] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.034990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.035986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 16:51:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:15.532 [2024-07-15 16:51:22.036027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036065] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 16:51:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:15.532 [2024-07-15 16:51:22.036570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036930] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.036976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.037984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038128] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.532 [2024-07-15 16:51:22.038811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.038859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.038902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.038947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.038991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.039037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.039081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.039123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.039156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.039199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.039240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.039405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.039444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 16:51:22.039481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533 [2024-07-15 
16:51:22.039523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.533
[... identical "Read NLB 1 * block size 512 > SGL length 1" error repeated for each subsequent read command, timestamps 16:51:22.039564 through 16:51:22.054421 ...]
[2024-07-15
16:51:22.054466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.054514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.054557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.054602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.054648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.054692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.054738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.054786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.054830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.054873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.054923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.054969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:15.536 [2024-07-15 16:51:22.055488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.055896] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.056988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057431] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.057969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 
16:51:22.058680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.536 [2024-07-15 16:51:22.058798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.058836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.058870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.058904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.059929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 
[2024-07-15 16:51:22.059972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060555] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.060962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.061737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.061790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.061834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.061876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.061919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.537 [2024-07-15 16:51:22.061961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062597] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.062970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 16:51:22.063722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.537 [2024-07-15 
16:51:22.063758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15
16:51:22.078863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.078906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.078943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.078980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.079955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 
[2024-07-15 16:51:22.080048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080691] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.080880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.081704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082379] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.082996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 16:51:22.083581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.541 [2024-07-15 
16:51:22.083627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.083675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.083722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.083766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.083813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.083855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.083901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.083948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.083990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.084770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 
[2024-07-15 16:51:22.085114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085672] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.085985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086906] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.086980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.542 [2024-07-15 16:51:22.087482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.545 [2024-07-15 16:51:22.102858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.545 [2024-07-15 16:51:22.102903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.545 [2024-07-15 16:51:22.102950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.545 [2024-07-15 16:51:22.103003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 
16:51:22.103472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.103996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.104031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.104062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.104102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.104140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.104182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.104221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.104263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:15.546 [2024-07-15 16:51:22.105077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105414] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.105960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106676] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.106986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.107023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.107062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.107101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.107143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.107179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.107219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.546 [2024-07-15 16:51:22.107266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.107912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 
16:51:22.107956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.108647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 
[2024-07-15 16:51:22.109730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.109998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110287] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.110991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111467] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.111967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.112002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.112038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.112074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.112116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.112164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.112205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.547 [2024-07-15 16:51:22.112249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 
16:51:22.127783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.127973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.128963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.129001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 
[2024-07-15 16:51:22.129046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.129085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.129124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.129156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.129194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.129237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.129277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.129315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.129352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.551 [2024-07-15 16:51:22.129386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.129428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.129466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.129506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.129544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.129587] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.129626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.129667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.129704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.129744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.129782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.129823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.130633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.130682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.130725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.130768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.130814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.130862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.130908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.130956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.552 [2024-07-15 16:51:22.130999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131589] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.131987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 
16:51:22.132756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.132972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.133977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.134020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.134069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.134112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.134155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 
[2024-07-15 16:51:22.134199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.134698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.134741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.134785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.134822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.134855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.552 [2024-07-15 16:51:22.134891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.134929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.134966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135205] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.135959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.136000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.136036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.136076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.136114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.136155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.136191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.136235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.136271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.136312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.136348] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.553 [2024-07-15 16:51:22.136386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.150867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:09:15.556 [2024-07-15 16:51:22.150904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.150942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.150982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151456] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.151968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 
16:51:22.152664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.152980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.153167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.153216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.556 [2024-07-15 16:51:22.153267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:15.557 [2024-07-15 16:51:22.153592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.153997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154096] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.154979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155266] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.155788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.156571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.156618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.156662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.156705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.156747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.156788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.156826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.156864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.156901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.156942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.156981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 
16:51:22.157175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.557 [2024-07-15 16:51:22.157675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.157717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.157761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.157803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.157845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.157888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.157931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.157974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 
[2024-07-15 16:51:22.158397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.158987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159041] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.159969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.160009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.160049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.160089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.160127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.160534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.160577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.160608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.160648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.160685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.160729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.558 [2024-07-15 16:51:22.160770] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:09:15.825 [2024-07-15 16:51:22.174722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.174759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.174798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.174837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.174878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.174915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175400] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.175719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.176989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 
16:51:22.177034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.177983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.178025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.178068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.178117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.178160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.178205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.825 [2024-07-15 16:51:22.178260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 
[2024-07-15 16:51:22.178345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.178917] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.179980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180257] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.180963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.181006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.181053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.181746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.181797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.181839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.181881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.181919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.181963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 
16:51:22.182204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.182969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.183007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.183045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.183081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.183116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.183157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.183194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.183240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.183287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.183338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 
[2024-07-15 16:51:22.183383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.183425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.826 [2024-07-15 16:51:22.183480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.183524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.183568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.183615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.183660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.183708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.183755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.183800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.183844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.183891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.183934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.183980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184024] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:15.827 [2024-07-15 16:51:22.184838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:09:15.830 [2024-07-15 16:51:22.199510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.199550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.199589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.199627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.199675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.199718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.199764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.199808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.199854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.199899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200254] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.200809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.201986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.202032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.202076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.202115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.202158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 
16:51:22.202205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.202252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.202295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.830 [2024-07-15 16:51:22.202344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.202994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 
[2024-07-15 16:51:22.203459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.203965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204096] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:15.831 [2024-07-15 16:51:22.204572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 
16:51:22.204844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.204957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.205970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 
[2024-07-15 16:51:22.206012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.206048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.206086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.206123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.206159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.206201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.206252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.206303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.206346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.206391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.206434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.831 [2024-07-15 16:51:22.206483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.206530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.206577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.206628] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.206671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.206716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.206763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.206805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.206851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.206900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.206944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.206984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.207030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.207076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 [2024-07-15 16:51:22.207572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:15.832 true 00:09:15.832 16:51:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:15.832 16:51:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.768 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.768 Initializing NVMe Controllers 00:09:16.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:16.768 Controller IO queue size 128, less than required. 00:09:16.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:16.768 Controller IO queue size 128, less than required. 00:09:16.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:16.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:16.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:16.768 Initialization complete. Launching workers. 00:09:16.768 ======================================================== 00:09:16.768 Latency(us) 00:09:16.768 Device Information : IOPS MiB/s Average min max 00:09:16.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2542.60 1.24 34097.13 1758.96 1067374.14 00:09:16.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17071.77 8.34 7478.50 1594.02 383136.84 00:09:16.768 ======================================================== 00:09:16.768 Total : 19614.37 9.58 10929.06 1594.02 1067374.14 00:09:16.768 00:09:16.768 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:16.768 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:17.027 true 00:09:17.027 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4162690 00:09:17.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: 
line 44: kill: (4162690) - No such process 00:09:17.027 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4162690 00:09:17.027 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.285 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.544 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:17.544 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:17.544 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:17.544 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.544 16:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:17.544 null0 00:09:17.544 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.544 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.544 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:17.803 null1 00:09:17.803 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.803 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.803 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:18.063 null2 00:09:18.063 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:18.063 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:18.063 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:18.063 null3 00:09:18.063 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:18.063 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:18.063 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:18.322 null4 00:09:18.322 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:18.322 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:18.322 16:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:18.579 null5 00:09:18.579 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:18.579 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:18.579 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:18.579 null6 00:09:18.838 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:18.839 null7 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4168781 4168782 4168786 4168789 4168792 4168794 4168796 4168799 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.839 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:19.098 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.098 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.098 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.098 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.098 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.098 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.098 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.098 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.358 16:51:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.358 16:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.358 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.876 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.876 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.876 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:19.876 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.876 
16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.876 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.876 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.876 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.876 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.876 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.876 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:20.134 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.134 16:51:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.393 16:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.393 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.393 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.393 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:09:20.652 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.652 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:20.652 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.652 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.652 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.652 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.652 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:20.652 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.652 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.652 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.652 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.910 16:51:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.910 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.168 16:51:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.168 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.425 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.425 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.425 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.425 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.425 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.425 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.425 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.425 16:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:09:21.425 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.425 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.425 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.684 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.685 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.685 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.685 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.943 16:51:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.943 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.201 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.202 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:22.202 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.202 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.202 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:22.202 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.202 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.202 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.202 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:22.202 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.202 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:22.460 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.460 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.460 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:22.460 16:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.460 
16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:22.460 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:22.460 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:22.460 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:22.460 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:22.460 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.460 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:22.726 rmmod nvme_tcp 00:09:22.726 rmmod 
nvme_fabrics 00:09:22.726 rmmod nvme_keyring 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 4162237 ']' 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 4162237 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 4162237 ']' 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 4162237 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4162237 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4162237' 00:09:22.726 killing process with pid 4162237 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 4162237 00:09:22.726 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 4162237 00:09:22.986 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:22.986 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:22.986 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:09:22.986 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:22.986 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:22.986 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.986 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.986 16:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.524 16:51:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:25.524 00:09:25.524 real 0m47.061s 00:09:25.524 user 3m13.436s 00:09:25.524 sys 0m15.435s 00:09:25.524 16:51:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.524 16:51:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.524 ************************************ 00:09:25.524 END TEST nvmf_ns_hotplug_stress 00:09:25.524 ************************************ 00:09:25.525 16:51:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:25.525 16:51:31 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:25.525 16:51:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:25.525 16:51:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.525 16:51:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:25.525 ************************************ 00:09:25.525 START TEST nvmf_connect_stress 00:09:25.525 ************************************ 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 
00:09:25.525 * Looking for test storage... 00:09:25.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:25.525 16:51:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.880 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.880 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.880 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.880 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.880 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.880 16:51:36 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.880 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.880 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.881 
16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:30.881 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:30.881 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.881 
16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:30.881 Found net devices under 0000:86:00.0: cvl_0_0 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.881 16:51:36 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:30.881 Found net devices under 0000:86:00.1: cvl_0_1 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:30.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:09:30.881 00:09:30.881 --- 10.0.0.2 ping statistics --- 00:09:30.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.881 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:09:30.881 00:09:30.881 --- 10.0.0.1 ping statistics --- 00:09:30.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.881 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.881 16:51:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:30.881 16:51:37 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=4172986 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 4172986 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 4172986 ']' 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.881 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.881 [2024-07-15 16:51:37.073788] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:09:30.881 [2024-07-15 16:51:37.073830] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.881 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.881 [2024-07-15 16:51:37.130877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:30.881 [2024-07-15 16:51:37.210116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:30.881 [2024-07-15 16:51:37.210150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.882 [2024-07-15 16:51:37.210157] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.882 [2024-07-15 16:51:37.210163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.882 [2024-07-15 16:51:37.210169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.882 [2024-07-15 16:51:37.210205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.882 [2024-07-15 16:51:37.210288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.882 [2024-07-15 16:51:37.210291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.450 [2024-07-15 16:51:37.931095] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.450 16:51:37 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.450 [2024-07-15 16:51:37.973325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.450 NULL1 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4173233 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i 
in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.450 16:51:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.018 16:51:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.018 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:32.018 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.018 16:51:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.018 16:51:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.276 16:51:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.276 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:32.276 16:51:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.276 16:51:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.276 16:51:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.534 16:51:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:09:32.534 16:51:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:32.534 16:51:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.534 16:51:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.534 16:51:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 16:51:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.792 16:51:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:32.792 16:51:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.792 16:51:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.792 16:51:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.049 16:51:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.049 16:51:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:33.049 16:51:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.049 16:51:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.049 16:51:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.615 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.615 16:51:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:33.615 16:51:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.615 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.615 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.875 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:33.875 16:51:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:33.875 16:51:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.875 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.875 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.134 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.134 16:51:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:34.134 16:51:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.134 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.134 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.393 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.394 16:51:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:34.394 16:51:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.394 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.394 16:51:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.653 16:51:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.653 16:51:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:34.653 16:51:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.653 16:51:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.653 16:51:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.220 16:51:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:35.220 16:51:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:35.220 16:51:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.220 16:51:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.220 16:51:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.478 16:51:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.478 16:51:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:35.478 16:51:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.478 16:51:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.478 16:51:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.736 16:51:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.736 16:51:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:35.736 16:51:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.736 16:51:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.736 16:51:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.993 16:51:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.993 16:51:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:35.993 16:51:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.993 16:51:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.993 16:51:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.558 16:51:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:36.558 16:51:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:36.558 16:51:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.558 16:51:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.558 16:51:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.815 16:51:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.815 16:51:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:36.815 16:51:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.815 16:51:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.815 16:51:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:37.073 16:51:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.073 16:51:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:37.073 16:51:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.073 16:51:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.073 16:51:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:37.330 16:51:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.330 16:51:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:37.330 16:51:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.330 16:51:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.330 16:51:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:37.587 16:51:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:37.587 16:51:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:37.587 16:51:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.587 16:51:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.587 16:51:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.152 16:51:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.152 16:51:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:38.152 16:51:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.152 16:51:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.152 16:51:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.410 16:51:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.410 16:51:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:38.410 16:51:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.410 16:51:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.410 16:51:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.668 16:51:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.668 16:51:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:38.668 16:51:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.668 16:51:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.668 16:51:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.926 16:51:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:38.926 16:51:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:38.926 16:51:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.926 16:51:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.926 16:51:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.493 16:51:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.493 16:51:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:39.493 16:51:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:39.493 16:51:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.493 16:51:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.751 16:51:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.751 16:51:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:39.751 16:51:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:39.751 16:51:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.751 16:51:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.009 16:51:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.009 16:51:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:40.009 16:51:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:40.009 16:51:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.009 16:51:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.267 16:51:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:40.267 16:51:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:40.267 16:51:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:40.267 16:51:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.267 16:51:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.525 16:51:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.525 16:51:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:40.525 16:51:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:40.525 16:51:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.525 16:51:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.091 16:51:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.091 16:51:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:41.091 16:51:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.091 16:51:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.091 16:51:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.349 16:51:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.349 16:51:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233 00:09:41.349 16:51:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.349 16:51:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.349 16:51:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.606 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
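The long run of `kill -0 4173233` / `rpc_cmd` lines above is connect_stress.sh polling the backgrounded stress process: while the process is alive, the script keeps issuing RPCs against the target. A minimal sketch of that polling pattern (the function name, interval, and the `sleep` stand-in for the stress process are illustrative, not taken from the script):

```shell
# Poll a PID until it exits, doing some work on each iteration,
# like the kill -0 / rpc_cmd loop traced in the log above.
# poll_until_exit and its interval argument are illustrative names.
poll_until_exit() {
    local pid=$1 interval=${2:-0.25}
    while kill -0 "$pid" 2>/dev/null; do
        : # the real script issues an RPC (rpc_cmd) here
        sleep "$interval"
    done
}

sleep 0.3 &            # stand-in for the backgrounded stress process
stress_pid=$!
poll_until_exit "$stress_pid" 0.1
wait "$stress_pid"
echo "stress process $stress_pid finished"
```

Once `kill -0` starts failing (as in the "No such process" message further down), the loop exits and the script falls through to `wait` and cleanup.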
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4173233
00:09:41.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4173233) - No such process
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4173233
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:41.606 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 4172986 ']'
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 4172986
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 4172986 ']'
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 4172986
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:41.606 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4172986
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4172986'
00:09:41.865 killing process with pid 4172986
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 4172986
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 4172986
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:41.865 16:51:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:44.397 16:51:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:44.397
00:09:44.397 real 0m18.849s
00:09:44.397 user 0m41.216s
00:09:44.397 sys 0m7.778s
16:51:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:44.397 16:51:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:09:44.397 ************************************
00:09:44.397 END TEST nvmf_connect_stress
00:09:44.397 ************************************
00:09:44.397 16:51:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:09:44.397 16:51:50 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:09:44.397 16:51:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:44.397 16:51:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:44.397 16:51:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:44.397 ************************************
00:09:44.397 START TEST nvmf_fused_ordering
00:09:44.397 ************************************
00:09:44.397 16:51:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:09:44.397 * Looking for test storage...
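The `run_test nvmf_fused_ordering ...` trace, the starred `START TEST` / `END TEST` banners, and the `real`/`user`/`sys` summary all come from the harness's `run_test` wrapper in autotest_common.sh. A hedged reconstruction of that wrapper's shape (names and banner layout are inferred from the log, not copied from the helper):

```shell
# Assumed sketch of the run_test pattern that frames each test in
# this log with banners and timing; the real helper lives in
# autotest_common.sh and takes a test name plus the command to run.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"              # bash keyword; prints real/user/sys to stderr
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch demo true
```

Wrapping every test this way is what lets a single flat console log be split back into per-test sections after the fact.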
00:09:44.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.397 16:51:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.397 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.398 16:51:50 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.398 16:51:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:49.686 16:51:55 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.686 
16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.686 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:49.687 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:49.687 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.687 
16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:49.687 Found net devices under 0000:86:00.0: cvl_0_0 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.687 16:51:55 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:49.687 Found net devices under 0000:86:00.1: cvl_0_1 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
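The device-discovery traces above show common.sh mapping each detected PCI NIC (the two 0x8086:0x159b ice devices at 0000:86:00.0/.1) to its kernel interfaces by globbing `/sys/bus/pci/devices/$pci/net/`, which is how it finds `cvl_0_0` and `cvl_0_1`. A self-contained sketch of that lookup; the sysfs root is parameterized so it can be exercised against a fake tree on any machine:

```shell
# List the net interfaces bound to a PCI device, mirroring the
# pci_net_devs glob traced above. The sysfs root defaults to the
# real location but can point at a test directory.
pci_net_devs() {
    local pci=$1 root=${2:-/sys/bus/pci/devices}
    local d
    for d in "$root/$pci/net/"*; do
        [ -e "$d" ] && basename "$d"
    done
    return 0
}

# Demo against a fabricated sysfs tree (the PCI address matches the log):
fake=$(mktemp -d)
mkdir -p "$fake/0000:86:00.0/net/cvl_0_0"
pci_net_devs 0000:86:00.0 "$fake"
# prints: cvl_0_0
```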
00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:49.687 16:51:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:49.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:49.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms
00:09:49.687
00:09:49.687 --- 10.0.0.2 ping statistics ---
00:09:49.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:49.687 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:49.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:49.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms
00:09:49.687
00:09:49.687 --- 10.0.0.1 ping statistics ---
00:09:49.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:49.687 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:49.687 16:51:56
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4178379 00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4178379 00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 4178379 ']' 00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.687 16:51:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:49.687 [2024-07-15 16:51:56.290420] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:09:49.687 [2024-07-15 16:51:56.290464] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.687 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.687 [2024-07-15 16:51:56.347761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.946 [2024-07-15 16:51:56.427299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
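The `waitforlisten 4178379` trace above blocks until the freshly launched `nvmf_tgt` is alive and listening on its UNIX RPC socket (`/var/tmp/spdk.sock`). A hedged sketch of that wait loop; the function name, retry budget, and interval are illustrative rather than copied from autotest_common.sh:

```shell
# Rough sketch of the waitforlisten idea: poll until the PID is
# alive and its RPC socket exists, giving up after max_retries.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i=0
    while [ "$i" -lt "$max_retries" ]; do
        if kill -0 "$pid" 2>/dev/null && [ -S "$rpc_addr" ]; then
            return 0    # process up and socket present
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

# With no target running, the wait times out quickly:
waitforlisten_sketch $$ /var/tmp/nonexistent.sock 2 || echo "no listener yet"
```

Only after this returns does the harness start issuing `rpc_cmd` calls, which is why every RPC in the log can assume the target is ready.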
00:09:49.946 [2024-07-15 16:51:56.427331] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.946 [2024-07-15 16:51:56.427338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.946 [2024-07-15 16:51:56.427344] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.946 [2024-07-15 16:51:56.427349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.946 [2024-07-15 16:51:56.427364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:50.510 [2024-07-15 16:51:57.134378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:50.510 [2024-07-15 16:51:57.150508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:50.510 NULL1 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.510 16:51:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:50.768 [2024-07-15 16:51:57.206014] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:09:50.768 [2024-07-15 16:51:57.206059] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4178626 ] 00:09:50.768 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.026 Attached to nqn.2016-06.io.spdk:cnode1 00:09:51.026 Namespace ID: 1 size: 1GB 00:09:51.026 fused_ordering(0) 00:09:51.026 fused_ordering(1) 00:09:51.026 fused_ordering(2) 00:09:51.026 fused_ordering(3) 00:09:51.026 fused_ordering(4) 00:09:51.026 fused_ordering(5) 00:09:51.026 fused_ordering(6) 00:09:51.026 fused_ordering(7) 00:09:51.026 fused_ordering(8) 00:09:51.026 fused_ordering(9) 00:09:51.026 fused_ordering(10) 00:09:51.026 fused_ordering(11) 00:09:51.026 fused_ordering(12) 00:09:51.026 fused_ordering(13) 00:09:51.026 fused_ordering(14) 00:09:51.026 fused_ordering(15) 00:09:51.026 fused_ordering(16) 00:09:51.026 fused_ordering(17) 00:09:51.026 fused_ordering(18) 00:09:51.026 fused_ordering(19) 00:09:51.026 fused_ordering(20) 00:09:51.026 fused_ordering(21) 00:09:51.026 fused_ordering(22) 00:09:51.026 fused_ordering(23) 00:09:51.026 fused_ordering(24) 00:09:51.026 fused_ordering(25) 00:09:51.026 
fused_ordering(26) 00:09:51.026 [... sequential fused_ordering(27) through fused_ordering(994) output elided; timestamps advance from 00:09:51.026 to 00:09:52.680 ...] fused_ordering(995) 
00:09:52.680 fused_ordering(996) 00:09:52.680 fused_ordering(997) 00:09:52.680 fused_ordering(998) 00:09:52.680 fused_ordering(999) 00:09:52.680 fused_ordering(1000) 00:09:52.680 fused_ordering(1001) 00:09:52.680 fused_ordering(1002) 00:09:52.680 fused_ordering(1003) 00:09:52.680 fused_ordering(1004) 00:09:52.680 fused_ordering(1005) 00:09:52.680 fused_ordering(1006) 00:09:52.680 fused_ordering(1007) 00:09:52.680 fused_ordering(1008) 00:09:52.680 fused_ordering(1009) 00:09:52.680 fused_ordering(1010) 00:09:52.680 fused_ordering(1011) 00:09:52.680 fused_ordering(1012) 00:09:52.680 fused_ordering(1013) 00:09:52.680 fused_ordering(1014) 00:09:52.680 fused_ordering(1015) 00:09:52.680 fused_ordering(1016) 00:09:52.680 fused_ordering(1017) 00:09:52.680 fused_ordering(1018) 00:09:52.680 fused_ordering(1019) 00:09:52.680 fused_ordering(1020) 00:09:52.680 fused_ordering(1021) 00:09:52.680 fused_ordering(1022) 00:09:52.680 fused_ordering(1023) 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.680 rmmod nvme_tcp 00:09:52.680 rmmod nvme_fabrics 00:09:52.680 rmmod nvme_keyring 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 
00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4178379 ']' 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4178379 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 4178379 ']' 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 4178379 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4178379 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4178379' 00:09:52.680 killing process with pid 4178379 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 4178379 00:09:52.680 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 4178379 00:09:52.938 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.938 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:52.938 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:52.938 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.938 16:51:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.938 16:51:59 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.938 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.938 16:51:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.838 16:52:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:54.838 00:09:54.838 real 0m10.848s 00:09:54.838 user 0m5.433s 00:09:54.838 sys 0m5.757s 00:09:54.838 16:52:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.838 16:52:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:54.838 ************************************ 00:09:54.838 END TEST nvmf_fused_ordering 00:09:54.838 ************************************ 00:09:54.838 16:52:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:54.838 16:52:01 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:54.838 16:52:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:54.838 16:52:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.838 16:52:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:55.096 ************************************ 00:09:55.096 START TEST nvmf_delete_subsystem 00:09:55.096 ************************************ 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:55.096 * Looking for test storage... 
00:09:55.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.096 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.097 16:52:01 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:55.097 16:52:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.369 16:52:06 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:00.369 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:00.369 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:00.369 Found net devices under 0000:86:00.0: cvl_0_0 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:00.369 Found net devices under 0000:86:00.1: cvl_0_1 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.369 
16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.369 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:00.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:10:00.370 00:10:00.370 --- 10.0.0.2 ping statistics --- 00:10:00.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.370 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:10:00.370 00:10:00.370 --- 10.0.0.1 ping statistics --- 00:10:00.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.370 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.370 
16:52:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4182371 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4182371 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 4182371 ']' 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.370 16:52:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.370 [2024-07-15 16:52:06.916189] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:10:00.370 [2024-07-15 16:52:06.916238] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.370 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.370 [2024-07-15 16:52:06.969063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:00.629 [2024-07-15 16:52:07.049930] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.629 [2024-07-15 16:52:07.049962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.629 [2024-07-15 16:52:07.049969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.629 [2024-07-15 16:52:07.049975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.629 [2024-07-15 16:52:07.049979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:00.629 [2024-07-15 16:52:07.050011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.629 [2024-07-15 16:52:07.050014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.196 [2024-07-15 16:52:07.781414] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.196 [2024-07-15 16:52:07.797564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.196 NULL1 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.196 Delay0 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4182420 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:01.196 16:52:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:01.196 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.454 [2024-07-15 16:52:07.872071] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:03.359 16:52:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.359 16:52:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.359 16:52:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, 
sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, 
sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 [2024-07-15 16:52:10.032455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb15c0 is same with the state(5) to be set 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 starting I/O failed: -6 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Read completed with error (sct=0, sc=8) 00:10:03.618 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 starting I/O failed: -6 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 starting I/O failed: -6 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 starting I/O failed: -6 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 starting I/O failed: -6 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error 
(sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 starting I/O failed: -6 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 starting I/O failed: -6 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 starting I/O failed: -6 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 starting I/O failed: -6 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 starting I/O failed: -6 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 [2024-07-15 16:52:10.032811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4600000c00 is same with the state(5) to be set 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed 
with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 
00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write 
completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Write completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:03.619 Read completed with error (sct=0, sc=8) 00:10:04.555 [2024-07-15 16:52:11.008890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb2ac0 is same with the state(5) to be set 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Write completed with 
error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Write completed with error (sct=0, sc=8) 00:10:04.555 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 [2024-07-15 16:52:11.034773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb1000 is same with the state(5) to be set 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, 
sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 [2024-07-15 16:52:11.034935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb17a0 is same with the state(5) to be set 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 
Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 [2024-07-15 16:52:11.035096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb13e0 is same with the state(5) to be set 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed 
with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Read completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 Write completed with error (sct=0, sc=8) 00:10:04.556 [2024-07-15 16:52:11.035192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f460000d2f0 is same with the state(5) to be set 00:10:04.556 Initializing NVMe Controllers 00:10:04.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:04.556 Controller IO queue size 128, less than required. 00:10:04.556 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:04.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:04.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:04.556 Initialization complete. Launching workers. 
00:10:04.556 ======================================================== 00:10:04.556 Latency(us) 00:10:04.556 Device Information : IOPS MiB/s Average min max 00:10:04.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 193.18 0.09 945305.31 1361.35 1011344.16 00:10:04.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.92 0.08 867201.26 238.37 1011106.13 00:10:04.556 ======================================================== 00:10:04.556 Total : 351.11 0.17 910175.06 238.37 1011344.16 00:10:04.556 00:10:04.556 [2024-07-15 16:52:11.035992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb2ac0 (9): Bad file descriptor 00:10:04.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:04.556 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.556 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:04.556 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4182420 00:10:04.556 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4182420 00:10:05.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4182420) - No such process 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4182420 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 4182420 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@636 -- # local arg=wait 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 4182420 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:05.123 [2024-07-15 16:52:11.568984] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4183095 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4183095 00:10:05.123 16:52:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.123 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.123 [2024-07-15 16:52:11.634131] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:10:05.690 16:52:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.690 16:52:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4183095 00:10:05.690 16:52:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.949 16:52:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.949 16:52:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4183095 00:10:05.949 16:52:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:06.517 16:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:06.517 16:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4183095 00:10:06.517 16:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:07.084 16:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:07.084 16:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4183095 00:10:07.084 16:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:07.679 16:52:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:07.679 16:52:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4183095 00:10:07.679 16:52:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:08.244 16:52:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:08.244 16:52:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4183095 00:10:08.244 16:52:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:08.244 Initializing NVMe Controllers 00:10:08.244 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:08.244 Controller IO queue size 128, less than required. 00:10:08.244 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:08.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:08.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:08.244 Initialization complete. Launching workers. 00:10:08.244 ======================================================== 00:10:08.244 Latency(us) 00:10:08.244 Device Information : IOPS MiB/s Average min max 00:10:08.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002757.61 1000176.21 1009583.26 00:10:08.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004634.38 1000301.20 1012141.34 00:10:08.244 ======================================================== 00:10:08.244 Total : 256.00 0.12 1003696.00 1000176.21 1012141.34 00:10:08.244 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4183095 00:10:08.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4183095) - No such process 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4183095 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:08.499 rmmod nvme_tcp 00:10:08.499 rmmod nvme_fabrics 00:10:08.499 rmmod nvme_keyring 00:10:08.499 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4182371 ']' 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4182371 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 4182371 ']' 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 4182371 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4182371 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4182371' 00:10:08.757 killing process with pid 4182371 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 4182371 00:10:08.757 16:52:15 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 4182371 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:08.757 16:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.357 16:52:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.357 00:10:11.357 real 0m15.945s 00:10:11.357 user 0m30.295s 00:10:11.357 sys 0m4.745s 00:10:11.357 16:52:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.357 16:52:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.357 ************************************ 00:10:11.357 END TEST nvmf_delete_subsystem 00:10:11.357 ************************************ 00:10:11.357 16:52:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:11.357 16:52:17 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:11.357 16:52:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.357 16:52:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.357 16:52:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:11.357 ************************************ 
00:10:11.357 START TEST nvmf_ns_masking 00:10:11.357 ************************************ 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:11.357 * Looking for test storage... 00:10:11.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.357 16:52:17 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.357 16:52:17 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=b751b0d4-c1b3-4f94-ae64-8baf8ea9edd0 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=756913de-e4d7-4566-b467-a3b476a3892b 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d7c2fbb6-53ad-4742-aced-abab5669cf8d 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:11.358 16:52:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:16.626 
16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:16.626 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:16.626 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:16.626 Found net devices under 0000:86:00.0: cvl_0_0 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:16.626 Found net devices under 0000:86:00.1: cvl_0_1 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.626 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:16.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:10:16.626 00:10:16.627 --- 10.0.0.2 ping statistics --- 00:10:16.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.627 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:10:16.627 00:10:16.627 --- 10.0.0.1 ping statistics --- 00:10:16.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.627 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:16.627 16:52:22 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4187092 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4187092 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4187092 ']' 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.627 16:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:16.627 [2024-07-15 16:52:22.978185] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:10:16.627 [2024-07-15 16:52:22.978249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.627 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.627 [2024-07-15 16:52:23.035345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.627 [2024-07-15 16:52:23.113417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.627 [2024-07-15 16:52:23.113454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.627 [2024-07-15 16:52:23.113461] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.627 [2024-07-15 16:52:23.113467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.627 [2024-07-15 16:52:23.113472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:16.627 [2024-07-15 16:52:23.113488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.193 16:52:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:17.193 16:52:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:17.193 16:52:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:17.193 16:52:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:17.193 16:52:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:17.193 16:52:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.193 16:52:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:17.452 [2024-07-15 16:52:23.961237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.452 16:52:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:17.452 16:52:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:17.452 16:52:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:17.711 Malloc1 00:10:17.711 16:52:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:17.711 Malloc2 00:10:17.711 16:52:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:17.969 16:52:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:18.228 16:52:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.228 [2024-07-15 16:52:24.869301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.228 16:52:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:18.228 16:52:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d7c2fbb6-53ad-4742-aced-abab5669cf8d -a 10.0.0.2 -s 4420 -i 4 00:10:18.486 16:52:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.486 16:52:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:18.486 16:52:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.486 16:52:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:18.486 16:52:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:20.385 16:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:20.385 16:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:20.385 16:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.385 16:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:20.385 16:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.385 16:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:20.385 
16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:20.385 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:20.642 [ 0]:0x1 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=567a372d1f6b4685bdcda5749e132441 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 567a372d1f6b4685bdcda5749e132441 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:20.642 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:20.929 [ 0]:0x1 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=567a372d1f6b4685bdcda5749e132441 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 567a372d1f6b4685bdcda5749e132441 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:20.929 [ 1]:0x2 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b9c9a2285c84b8e98d847d09964c035 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b9c9a2285c84b8e98d847d09964c035 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.929 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.186 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:21.186 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:21.186 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d7c2fbb6-53ad-4742-aced-abab5669cf8d -a 10.0.0.2 -s 4420 -i 4 00:10:21.444 16:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:21.444 16:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:21.444 16:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:21.444 16:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:10:21.444 16:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:10:21.444 16:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:23.344 16:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:23.344 16:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:23.344 16:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:23.344 16:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:23.344 16:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.344 16:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:23.344 16:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:23.344 16:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:23.603 [ 0]:0x2 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b9c9a2285c84b8e98d847d09964c035 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b9c9a2285c84b8e98d847d09964c035 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:23.603 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:23.862 [ 0]:0x1 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=567a372d1f6b4685bdcda5749e132441 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 567a372d1f6b4685bdcda5749e132441 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:10:23.862 [ 1]:0x2 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b9c9a2285c84b8e98d847d09964c035 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b9c9a2285c84b8e98d847d09964c035 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:23.862 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:24.121 [ 0]:0x2 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b9c9a2285c84b8e98d847d09964c035 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b9c9a2285c84b8e98d847d09964c035 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:10:24.121 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.380 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:24.380 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:24.380 16:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d7c2fbb6-53ad-4742-aced-abab5669cf8d -a 10.0.0.2 -s 4420 -i 4 00:10:24.639 16:52:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:24.639 16:52:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:24.639 16:52:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.639 16:52:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:24.639 16:52:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:24.639 16:52:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:26.545 [ 0]:0x1 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:26.545 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:26.802 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=567a372d1f6b4685bdcda5749e132441 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 567a372d1f6b4685bdcda5749e132441 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:26.803 [ 1]:0x2 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b9c9a2285c84b8e98d847d09964c035 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b9c9a2285c84b8e98d847d09964c035 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:26.803 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:27.061 16:52:33 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:27.061 [ 0]:0x2 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b9c9a2285c84b8e98d847d09964c035 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b9c9a2285c84b8e98d847d09964c035 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:27.061 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:27.320 [2024-07-15 16:52:33.771466] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:27.320 request: 00:10:27.320 { 00:10:27.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.320 "nsid": 2, 00:10:27.320 "host": "nqn.2016-06.io.spdk:host1", 00:10:27.320 "method": "nvmf_ns_remove_host", 00:10:27.320 "req_id": 1 00:10:27.320 } 00:10:27.320 Got JSON-RPC error response 00:10:27.320 response: 00:10:27.320 { 00:10:27.320 "code": -32602, 00:10:27.320 "message": "Invalid parameters" 00:10:27.320 } 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:27.320 [ 0]:0x2 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:27.320 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:27.321 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b9c9a2285c84b8e98d847d09964c035 00:10:27.321 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b9c9a2285c84b8e98d847d09964c035 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:27.321 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:27.321 16:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.580 16:52:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4189097 00:10:27.580 16:52:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:27.580 16:52:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.580 16:52:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4189097 /var/tmp/host.sock 00:10:27.580 16:52:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4189097 ']' 00:10:27.580 16:52:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:27.580 16:52:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.580 16:52:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:27.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:10:27.580 16:52:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.580 16:52:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:27.580 [2024-07-15 16:52:34.127048] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:10:27.580 [2024-07-15 16:52:34.127095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4189097 ] 00:10:27.580 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.580 [2024-07-15 16:52:34.181141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.839 [2024-07-15 16:52:34.257113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.407 16:52:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:28.407 16:52:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:28.407 16:52:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.666 16:52:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:28.666 16:52:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid b751b0d4-c1b3-4f94-ae64-8baf8ea9edd0 00:10:28.666 16:52:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:28.666 16:52:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B751B0D4C1B34F94AE648BAF8EA9EDD0 -i 00:10:28.924 16:52:35 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # uuid2nguid 756913de-e4d7-4566-b467-a3b476a3892b 00:10:28.924 16:52:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:28.924 16:52:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 756913DEE4D74566B467A3B476A3892B -i 00:10:29.183 16:52:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:29.183 16:52:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:29.441 16:52:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:29.441 16:52:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:29.700 nvme0n1 00:10:29.700 16:52:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:29.700 16:52:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:30.267 nvme1n2 00:10:30.267 16:52:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # 
hostrpc bdev_get_bdevs 00:10:30.267 16:52:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:30.267 16:52:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:30.267 16:52:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:30.267 16:52:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:30.267 16:52:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:30.267 16:52:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:30.267 16:52:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:30.268 16:52:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:30.526 16:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ b751b0d4-c1b3-4f94-ae64-8baf8ea9edd0 == \b\7\5\1\b\0\d\4\-\c\1\b\3\-\4\f\9\4\-\a\e\6\4\-\8\b\a\f\8\e\a\9\e\d\d\0 ]] 00:10:30.526 16:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:30.526 16:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:30.526 16:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 756913de-e4d7-4566-b467-a3b476a3892b == \7\5\6\9\1\3\d\e\-\e\4\d\7\-\4\5\6\6\-\b\4\6\7\-\a\3\b\4\7\6\a\3\8\9\2\b ]] 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 4189097 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # 
'[' -z 4189097 ']' 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4189097 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4189097 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4189097' 00:10:30.785 killing process with pid 4189097 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4189097 00:10:30.785 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4189097 00:10:31.044 16:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:31.303 rmmod nvme_tcp 00:10:31.303 rmmod 
nvme_fabrics 00:10:31.303 rmmod nvme_keyring 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4187092 ']' 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4187092 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4187092 ']' 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4187092 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4187092 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4187092' 00:10:31.303 killing process with pid 4187092 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4187092 00:10:31.303 16:52:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4187092 00:10:31.561 16:52:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:31.561 16:52:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:31.561 16:52:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:31.561 16:52:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:10:31.561 16:52:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:31.561 16:52:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.561 16:52:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.561 16:52:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.490 16:52:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:33.490 00:10:33.490 real 0m22.614s 00:10:33.490 user 0m24.412s 00:10:33.490 sys 0m6.130s 00:10:33.490 16:52:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.490 16:52:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:33.490 ************************************ 00:10:33.490 END TEST nvmf_ns_masking 00:10:33.490 ************************************ 00:10:33.749 16:52:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:33.749 16:52:40 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:33.749 16:52:40 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:33.749 16:52:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:33.749 16:52:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.749 16:52:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:33.749 ************************************ 00:10:33.749 START TEST nvmf_nvme_cli 00:10:33.749 ************************************ 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:33.749 * Looking for test storage... 
00:10:33.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.749 16:52:40 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.749 16:52:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.750 16:52:40 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:33.750 16:52:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:39.011 16:52:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.011 16:52:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:39.011 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:39.011 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.011 16:52:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:39.011 Found net devices under 0000:86:00.0: cvl_0_0 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:39.011 Found net devices under 0000:86:00.1: cvl_0_1 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:39.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:10:39.011 00:10:39.011 --- 10.0.0.2 ping statistics --- 00:10:39.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.011 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:10:39.011 00:10:39.011 --- 10.0.0.1 ping statistics --- 00:10:39.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.011 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4193324 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4193324 00:10:39.011 16:52:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.012 16:52:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 4193324 ']' 
00:10:39.012 16:52:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.012 16:52:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:39.012 16:52:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.012 16:52:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:39.012 16:52:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:39.012 [2024-07-15 16:52:45.623433] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:10:39.012 [2024-07-15 16:52:45.623481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.012 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.269 [2024-07-15 16:52:45.683055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.269 [2024-07-15 16:52:45.764551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.269 [2024-07-15 16:52:45.764589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.269 [2024-07-15 16:52:45.764597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.269 [2024-07-15 16:52:45.764603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.269 [2024-07-15 16:52:45.764607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:39.269 [2024-07-15 16:52:45.764656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.269 [2024-07-15 16:52:45.764750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.269 [2024-07-15 16:52:45.764837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.269 [2024-07-15 16:52:45.764838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:39.834 [2024-07-15 16:52:46.475167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.834 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:39.834 Malloc0 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.092 
16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:40.092 Malloc1 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:40.092 [2024-07-15 16:52:46.557005] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:40.092 16:52:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:40.093 00:10:40.093 Discovery Log Number of Records 2, Generation counter 2 00:10:40.093 =====Discovery Log Entry 0====== 00:10:40.093 trtype: tcp 00:10:40.093 adrfam: ipv4 00:10:40.093 subtype: current discovery subsystem 00:10:40.093 treq: not required 00:10:40.093 portid: 0 00:10:40.093 trsvcid: 4420 00:10:40.093 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:40.093 traddr: 10.0.0.2 00:10:40.093 eflags: explicit discovery connections, duplicate discovery information 00:10:40.093 sectype: none 00:10:40.093 =====Discovery Log Entry 1====== 00:10:40.093 trtype: tcp 00:10:40.093 adrfam: ipv4 00:10:40.093 subtype: nvme subsystem 00:10:40.093 treq: not required 00:10:40.093 portid: 0 00:10:40.093 trsvcid: 4420 00:10:40.093 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:40.093 traddr: 10.0.0.2 00:10:40.093 eflags: none 00:10:40.093 sectype: none 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:40.093 16:52:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.466 16:52:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:41.466 16:52:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:10:41.466 16:52:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.466 16:52:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:41.466 16:52:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:41.466 16:52:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:10:43.362 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:43.362 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:43.363 /dev/nvme0n1 ]] 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:43.363 16:52:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:43.363 rmmod nvme_tcp 00:10:43.363 rmmod nvme_fabrics 00:10:43.363 rmmod nvme_keyring 00:10:43.363 16:52:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.363 16:52:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:43.363 16:52:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:43.363 16:52:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4193324 ']' 00:10:43.363 16:52:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4193324 00:10:43.363 16:52:50 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@948 -- # '[' -z 4193324 ']' 00:10:43.363 16:52:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 4193324 00:10:43.363 16:52:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:10:43.363 16:52:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:43.363 16:52:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4193324 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4193324' 00:10:43.621 killing process with pid 4193324 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 4193324 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 4193324 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.621 16:52:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.151 16:52:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:46.151 00:10:46.151 real 0m12.128s 00:10:46.151 user 
0m19.480s 00:10:46.151 sys 0m4.494s 00:10:46.151 16:52:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.151 16:52:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:46.151 ************************************ 00:10:46.151 END TEST nvmf_nvme_cli 00:10:46.151 ************************************ 00:10:46.151 16:52:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:46.151 16:52:52 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:46.152 16:52:52 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:46.152 16:52:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:46.152 16:52:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.152 16:52:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.152 ************************************ 00:10:46.152 START TEST nvmf_vfio_user 00:10:46.152 ************************************ 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:46.152 * Looking for test storage... 
00:10:46.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.152 
16:52:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=884 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 884' 00:10:46.152 Process pid: 884 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 884 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 884 ']' 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:46.152 16:52:52 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:46.152 16:52:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:46.152 [2024-07-15 16:52:52.536130] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:10:46.152 [2024-07-15 16:52:52.536185] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.152 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.152 [2024-07-15 16:52:52.590552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.152 [2024-07-15 16:52:52.671626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.152 [2024-07-15 16:52:52.671662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.152 [2024-07-15 16:52:52.671669] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.152 [2024-07-15 16:52:52.671674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.152 [2024-07-15 16:52:52.671679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:46.152 [2024-07-15 16:52:52.671717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.152 [2024-07-15 16:52:52.671815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.152 [2024-07-15 16:52:52.671908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.152 [2024-07-15 16:52:52.671910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.727 16:52:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:46.727 16:52:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:46.727 16:52:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:48.099 16:52:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:48.099 16:52:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:48.099 16:52:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:48.099 16:52:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:48.099 16:52:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:48.099 16:52:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:48.099 Malloc1 00:10:48.099 16:52:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:48.355 16:52:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:48.612 16:52:55 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:48.868 16:52:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:48.868 16:52:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:48.868 16:52:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:48.868 Malloc2 00:10:48.868 16:52:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:49.125 16:52:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:49.382 16:52:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:49.382 16:52:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:49.382 16:52:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:49.640 16:52:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:49.640 16:52:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:49.640 16:52:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:49.640 16:52:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:49.640 [2024-07-15 16:52:56.076625] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:10:49.640 [2024-07-15 16:52:56.076660] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462 ] 00:10:49.640 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.640 [2024-07-15 16:52:56.106770] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:49.640 [2024-07-15 16:52:56.109509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:49.640 [2024-07-15 16:52:56.109526] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f20a88de000 00:10:49.640 [2024-07-15 16:52:56.110508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:49.640 [2024-07-15 16:52:56.111509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:49.640 [2024-07-15 16:52:56.112513] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:49.640 [2024-07-15 16:52:56.113518] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:49.640 [2024-07-15 16:52:56.114521] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:49.640 [2024-07-15 16:52:56.115527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:49.640 [2024-07-15 16:52:56.116527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:49.640 [2024-07-15 16:52:56.117538] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:49.640 [2024-07-15 16:52:56.118545] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:49.640 [2024-07-15 16:52:56.118553] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f20a88d3000 00:10:49.640 [2024-07-15 16:52:56.119494] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:49.640 [2024-07-15 16:52:56.128109] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:49.640 [2024-07-15 16:52:56.128129] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:49.640 [2024-07-15 16:52:56.132636] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:49.640 [2024-07-15 16:52:56.132672] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:49.640 [2024-07-15 16:52:56.132741] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:49.640 [2024-07-15 16:52:56.132757] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:49.640 [2024-07-15 16:52:56.132762] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:49.640 [2024-07-15 16:52:56.135230] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:49.640 [2024-07-15 16:52:56.135238] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:49.640 [2024-07-15 16:52:56.135245] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:49.640 [2024-07-15 16:52:56.135633] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:49.640 [2024-07-15 16:52:56.135641] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:49.640 [2024-07-15 16:52:56.135647] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:49.640 [2024-07-15 16:52:56.136640] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:49.640 [2024-07-15 16:52:56.136648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:49.640 [2024-07-15 16:52:56.137650] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:49.640 [2024-07-15 16:52:56.137658] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:49.640 [2024-07-15 16:52:56.137662] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:49.640 [2024-07-15 16:52:56.137667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:49.640 [2024-07-15 16:52:56.137772] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:49.640 [2024-07-15 16:52:56.137777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:49.640 [2024-07-15 16:52:56.137781] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:49.640 [2024-07-15 16:52:56.138656] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:49.640 [2024-07-15 16:52:56.139664] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:49.640 [2024-07-15 16:52:56.140671] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:49.640 [2024-07-15 16:52:56.141669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:49.640 [2024-07-15 16:52:56.141731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:49.640 [2024-07-15 16:52:56.142684] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:49.640 [2024-07-15 16:52:56.142691] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:49.640 [2024-07-15 16:52:56.142696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.142712] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:49.640 [2024-07-15 16:52:56.142720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.142733] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:49.640 [2024-07-15 16:52:56.142738] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:49.640 [2024-07-15 16:52:56.142750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:49.640 [2024-07-15 16:52:56.142790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:49.640 [2024-07-15 16:52:56.142798] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:49.640 [2024-07-15 16:52:56.142806] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:49.640 [2024-07-15 16:52:56.142811] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:10:49.640 [2024-07-15 16:52:56.142815] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:49.640 [2024-07-15 16:52:56.142819] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:49.640 [2024-07-15 16:52:56.142822] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:49.640 [2024-07-15 16:52:56.142826] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.142833] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.142842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:49.640 [2024-07-15 16:52:56.142852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:49.640 [2024-07-15 16:52:56.142864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.640 [2024-07-15 16:52:56.142871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.640 [2024-07-15 16:52:56.142879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.640 [2024-07-15 16:52:56.142886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.640 [2024-07-15 16:52:56.142890] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.142898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.142908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:49.640 [2024-07-15 16:52:56.142916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:49.640 [2024-07-15 16:52:56.142921] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:49.640 [2024-07-15 16:52:56.142925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.142931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.142936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.142944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:49.640 [2024-07-15 16:52:56.142952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:49.640 [2024-07-15 16:52:56.142999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:10:49.640 [2024-07-15 16:52:56.143006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143013] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:49.640 [2024-07-15 16:52:56.143017] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:49.640 [2024-07-15 16:52:56.143022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:49.640 [2024-07-15 16:52:56.143037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:49.640 [2024-07-15 16:52:56.143045] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:49.640 [2024-07-15 16:52:56.143056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143069] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:49.640 [2024-07-15 16:52:56.143072] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:49.640 [2024-07-15 16:52:56.143078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:49.640 [2024-07-15 16:52:56.143097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:49.640 
[2024-07-15 16:52:56.143108] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143115] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143121] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:49.640 [2024-07-15 16:52:56.143125] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:49.640 [2024-07-15 16:52:56.143132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:49.640 [2024-07-15 16:52:56.143146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:49.640 [2024-07-15 16:52:56.143153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143184] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:49.640 [2024-07-15 16:52:56.143187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:49.640 [2024-07-15 16:52:56.143192] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:49.640 [2024-07-15 16:52:56.143207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:49.640 [2024-07-15 16:52:56.143216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:49.640 [2024-07-15 16:52:56.143230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:49.640 [2024-07-15 16:52:56.143236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:49.640 [2024-07-15 16:52:56.143245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:49.640 [2024-07-15 16:52:56.143255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:49.640 [2024-07-15 16:52:56.143265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:49.640 [2024-07-15 16:52:56.143275] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:49.640 [2024-07-15 16:52:56.143286] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:49.640 [2024-07-15 16:52:56.143290] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:49.640 [2024-07-15 16:52:56.143294] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:49.640 [2024-07-15 16:52:56.143296] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:49.640 [2024-07-15 16:52:56.143302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:49.640 [2024-07-15 16:52:56.143309] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:49.640 [2024-07-15 16:52:56.143312] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:49.640 [2024-07-15 16:52:56.143318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:49.641 [2024-07-15 16:52:56.143325] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:49.641 [2024-07-15 16:52:56.143329] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:49.641 [2024-07-15 16:52:56.143335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:49.641 [2024-07-15 16:52:56.143342] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:49.641 [2024-07-15 16:52:56.143345] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:49.641 [2024-07-15 16:52:56.143350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:49.641 [2024-07-15 16:52:56.143357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:49.641 [2024-07-15 16:52:56.143367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:49.641 [2024-07-15 16:52:56.143377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:49.641 [2024-07-15 16:52:56.143383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:49.641 ===================================================== 00:10:49.641 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:49.641 ===================================================== 00:10:49.641 Controller Capabilities/Features 00:10:49.641 ================================ 00:10:49.641 Vendor ID: 4e58 00:10:49.641 Subsystem Vendor ID: 4e58 00:10:49.641 Serial Number: SPDK1 00:10:49.641 Model Number: SPDK bdev Controller 00:10:49.641 Firmware Version: 24.09 00:10:49.641 Recommended Arb Burst: 6 00:10:49.641 IEEE OUI Identifier: 8d 6b 50 00:10:49.641 Multi-path I/O 00:10:49.641 May have multiple subsystem ports: Yes 00:10:49.641 May have multiple controllers: Yes 00:10:49.641 Associated with SR-IOV VF: No 00:10:49.641 Max Data Transfer Size: 131072 00:10:49.641 Max Number of Namespaces: 32 00:10:49.641 Max Number of I/O Queues: 127 00:10:49.641 NVMe Specification Version (VS): 1.3 00:10:49.641 NVMe Specification Version (Identify): 1.3 00:10:49.641 Maximum Queue Entries: 256 00:10:49.641 
Contiguous Queues Required: Yes 00:10:49.641 Arbitration Mechanisms Supported 00:10:49.641 Weighted Round Robin: Not Supported 00:10:49.641 Vendor Specific: Not Supported 00:10:49.641 Reset Timeout: 15000 ms 00:10:49.641 Doorbell Stride: 4 bytes 00:10:49.641 NVM Subsystem Reset: Not Supported 00:10:49.641 Command Sets Supported 00:10:49.641 NVM Command Set: Supported 00:10:49.641 Boot Partition: Not Supported 00:10:49.641 Memory Page Size Minimum: 4096 bytes 00:10:49.641 Memory Page Size Maximum: 4096 bytes 00:10:49.641 Persistent Memory Region: Not Supported 00:10:49.641 Optional Asynchronous Events Supported 00:10:49.641 Namespace Attribute Notices: Supported 00:10:49.641 Firmware Activation Notices: Not Supported 00:10:49.641 ANA Change Notices: Not Supported 00:10:49.641 PLE Aggregate Log Change Notices: Not Supported 00:10:49.641 LBA Status Info Alert Notices: Not Supported 00:10:49.641 EGE Aggregate Log Change Notices: Not Supported 00:10:49.641 Normal NVM Subsystem Shutdown event: Not Supported 00:10:49.641 Zone Descriptor Change Notices: Not Supported 00:10:49.641 Discovery Log Change Notices: Not Supported 00:10:49.641 Controller Attributes 00:10:49.641 128-bit Host Identifier: Supported 00:10:49.641 Non-Operational Permissive Mode: Not Supported 00:10:49.641 NVM Sets: Not Supported 00:10:49.641 Read Recovery Levels: Not Supported 00:10:49.641 Endurance Groups: Not Supported 00:10:49.641 Predictable Latency Mode: Not Supported 00:10:49.641 Traffic Based Keep ALive: Not Supported 00:10:49.641 Namespace Granularity: Not Supported 00:10:49.641 SQ Associations: Not Supported 00:10:49.641 UUID List: Not Supported 00:10:49.641 Multi-Domain Subsystem: Not Supported 00:10:49.641 Fixed Capacity Management: Not Supported 00:10:49.641 Variable Capacity Management: Not Supported 00:10:49.641 Delete Endurance Group: Not Supported 00:10:49.641 Delete NVM Set: Not Supported 00:10:49.641 Extended LBA Formats Supported: Not Supported 00:10:49.641 Flexible Data Placement 
Supported: Not Supported 00:10:49.641 00:10:49.641 Controller Memory Buffer Support 00:10:49.641 ================================ 00:10:49.641 Supported: No 00:10:49.641 00:10:49.641 Persistent Memory Region Support 00:10:49.641 ================================ 00:10:49.641 Supported: No 00:10:49.641 00:10:49.641 Admin Command Set Attributes 00:10:49.641 ============================ 00:10:49.641 Security Send/Receive: Not Supported 00:10:49.641 Format NVM: Not Supported 00:10:49.641 Firmware Activate/Download: Not Supported 00:10:49.641 Namespace Management: Not Supported 00:10:49.641 Device Self-Test: Not Supported 00:10:49.641 Directives: Not Supported 00:10:49.641 NVMe-MI: Not Supported 00:10:49.641 Virtualization Management: Not Supported 00:10:49.641 Doorbell Buffer Config: Not Supported 00:10:49.641 Get LBA Status Capability: Not Supported 00:10:49.641 Command & Feature Lockdown Capability: Not Supported 00:10:49.641 Abort Command Limit: 4 00:10:49.641 Async Event Request Limit: 4 00:10:49.641 Number of Firmware Slots: N/A 00:10:49.641 Firmware Slot 1 Read-Only: N/A 00:10:49.641 Firmware Activation Without Reset: N/A 00:10:49.641 Multiple Update Detection Support: N/A 00:10:49.641 Firmware Update Granularity: No Information Provided 00:10:49.641 Per-Namespace SMART Log: No 00:10:49.641 Asymmetric Namespace Access Log Page: Not Supported 00:10:49.641 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:49.641 Command Effects Log Page: Supported 00:10:49.641 Get Log Page Extended Data: Supported 00:10:49.641 Telemetry Log Pages: Not Supported 00:10:49.641 Persistent Event Log Pages: Not Supported 00:10:49.641 Supported Log Pages Log Page: May Support 00:10:49.641 Commands Supported & Effects Log Page: Not Supported 00:10:49.641 Feature Identifiers & Effects Log Page:May Support 00:10:49.641 NVMe-MI Commands & Effects Log Page: May Support 00:10:49.641 Data Area 4 for Telemetry Log: Not Supported 00:10:49.641 Error Log Page Entries Supported: 128 00:10:49.641 Keep 
Alive: Supported 00:10:49.641 Keep Alive Granularity: 10000 ms 00:10:49.641 00:10:49.641 NVM Command Set Attributes 00:10:49.641 ========================== 00:10:49.641 Submission Queue Entry Size 00:10:49.641 Max: 64 00:10:49.641 Min: 64 00:10:49.641 Completion Queue Entry Size 00:10:49.641 Max: 16 00:10:49.641 Min: 16 00:10:49.641 Number of Namespaces: 32 00:10:49.641 Compare Command: Supported 00:10:49.641 Write Uncorrectable Command: Not Supported 00:10:49.641 Dataset Management Command: Supported 00:10:49.641 Write Zeroes Command: Supported 00:10:49.641 Set Features Save Field: Not Supported 00:10:49.641 Reservations: Not Supported 00:10:49.641 Timestamp: Not Supported 00:10:49.641 Copy: Supported 00:10:49.641 Volatile Write Cache: Present 00:10:49.641 Atomic Write Unit (Normal): 1 00:10:49.641 Atomic Write Unit (PFail): 1 00:10:49.641 Atomic Compare & Write Unit: 1 00:10:49.641 Fused Compare & Write: Supported 00:10:49.641 Scatter-Gather List 00:10:49.641 SGL Command Set: Supported (Dword aligned) 00:10:49.641 SGL Keyed: Not Supported 00:10:49.641 SGL Bit Bucket Descriptor: Not Supported 00:10:49.641 SGL Metadata Pointer: Not Supported 00:10:49.641 Oversized SGL: Not Supported 00:10:49.641 SGL Metadata Address: Not Supported 00:10:49.641 SGL Offset: Not Supported 00:10:49.641 Transport SGL Data Block: Not Supported 00:10:49.641 Replay Protected Memory Block: Not Supported 00:10:49.641 00:10:49.641 Firmware Slot Information 00:10:49.641 ========================= 00:10:49.641 Active slot: 1 00:10:49.641 Slot 1 Firmware Revision: 24.09 00:10:49.641 00:10:49.641 00:10:49.641 Commands Supported and Effects 00:10:49.641 ============================== 00:10:49.641 Admin Commands 00:10:49.641 -------------- 00:10:49.641 Get Log Page (02h): Supported 00:10:49.641 Identify (06h): Supported 00:10:49.641 Abort (08h): Supported 00:10:49.641 Set Features (09h): Supported 00:10:49.641 Get Features (0Ah): Supported 00:10:49.641 Asynchronous Event Request (0Ch): Supported 
00:10:49.641 Keep Alive (18h): Supported 00:10:49.641 I/O Commands 00:10:49.641 ------------ 00:10:49.641 Flush (00h): Supported LBA-Change 00:10:49.641 Write (01h): Supported LBA-Change 00:10:49.641 Read (02h): Supported 00:10:49.641 Compare (05h): Supported 00:10:49.641 Write Zeroes (08h): Supported LBA-Change 00:10:49.641 Dataset Management (09h): Supported LBA-Change 00:10:49.641 Copy (19h): Supported LBA-Change 00:10:49.641 00:10:49.641 Error Log 00:10:49.641 ========= 00:10:49.641 00:10:49.641 Arbitration 00:10:49.641 =========== 00:10:49.641 Arbitration Burst: 1 00:10:49.641 00:10:49.641 Power Management 00:10:49.641 ================ 00:10:49.641 Number of Power States: 1 00:10:49.641 Current Power State: Power State #0 00:10:49.641 Power State #0: 00:10:49.641 Max Power: 0.00 W 00:10:49.641 Non-Operational State: Operational 00:10:49.641 Entry Latency: Not Reported 00:10:49.641 Exit Latency: Not Reported 00:10:49.641 Relative Read Throughput: 0 00:10:49.641 Relative Read Latency: 0 00:10:49.641 Relative Write Throughput: 0 00:10:49.641 Relative Write Latency: 0 00:10:49.641 Idle Power: Not Reported 00:10:49.641 Active Power: Not Reported 00:10:49.641 Non-Operational Permissive Mode: Not Supported 00:10:49.641 00:10:49.641 Health Information 00:10:49.641 ================== 00:10:49.641 Critical Warnings: 00:10:49.641 Available Spare Space: OK 00:10:49.641 Temperature: OK 00:10:49.641 Device Reliability: OK 00:10:49.641 Read Only: No 00:10:49.641 Volatile Memory Backup: OK 00:10:49.641 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:49.641 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:49.641 Available Spare: 0% 00:10:49.641 Available Sp[2024-07-15 16:52:56.143470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:49.641 [2024-07-15 16:52:56.143481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:10:49.641 [2024-07-15 16:52:56.143507] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:49.641 [2024-07-15 16:52:56.143515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.641 [2024-07-15 16:52:56.143521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.641 [2024-07-15 16:52:56.143526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.641 [2024-07-15 16:52:56.143532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.641 [2024-07-15 16:52:56.143693] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:49.641 [2024-07-15 16:52:56.143702] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:49.641 [2024-07-15 16:52:56.144697] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:49.641 [2024-07-15 16:52:56.144743] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:49.641 [2024-07-15 16:52:56.144750] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:49.641 [2024-07-15 16:52:56.145705] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:49.641 [2024-07-15 16:52:56.145715] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:10:49.641 [2024-07-15 16:52:56.145760] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:49.641 [2024-07-15 16:52:56.150232] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:49.641 are Threshold: 0% 00:10:49.641 Life Percentage Used: 0% 00:10:49.641 Data Units Read: 0 00:10:49.641 Data Units Written: 0 00:10:49.641 Host Read Commands: 0 00:10:49.641 Host Write Commands: 0 00:10:49.641 Controller Busy Time: 0 minutes 00:10:49.641 Power Cycles: 0 00:10:49.641 Power On Hours: 0 hours 00:10:49.641 Unsafe Shutdowns: 0 00:10:49.641 Unrecoverable Media Errors: 0 00:10:49.641 Lifetime Error Log Entries: 0 00:10:49.641 Warning Temperature Time: 0 minutes 00:10:49.641 Critical Temperature Time: 0 minutes 00:10:49.641 00:10:49.641 Number of Queues 00:10:49.641 ================ 00:10:49.641 Number of I/O Submission Queues: 127 00:10:49.641 Number of I/O Completion Queues: 127 00:10:49.641 00:10:49.641 Active Namespaces 00:10:49.641 ================= 00:10:49.641 Namespace ID:1 00:10:49.641 Error Recovery Timeout: Unlimited 00:10:49.641 Command Set Identifier: NVM (00h) 00:10:49.641 Deallocate: Supported 00:10:49.641 Deallocated/Unwritten Error: Not Supported 00:10:49.641 Deallocated Read Value: Unknown 00:10:49.641 Deallocate in Write Zeroes: Not Supported 00:10:49.641 Deallocated Guard Field: 0xFFFF 00:10:49.641 Flush: Supported 00:10:49.641 Reservation: Supported 00:10:49.641 Namespace Sharing Capabilities: Multiple Controllers 00:10:49.641 Size (in LBAs): 131072 (0GiB) 00:10:49.641 Capacity (in LBAs): 131072 (0GiB) 00:10:49.641 Utilization (in LBAs): 131072 (0GiB) 00:10:49.641 NGUID: 3A17FD44A6CB4C0E888CAAA300E4965F 00:10:49.641 UUID: 3a17fd44-a6cb-4c0e-888c-aaa300e4965f 00:10:49.641 Thin Provisioning: Not Supported 00:10:49.641 Per-NS Atomic Units: Yes 00:10:49.641 Atomic Boundary Size (Normal): 0 
00:10:49.641 Atomic Boundary Size (PFail): 0 00:10:49.641 Atomic Boundary Offset: 0 00:10:49.641 Maximum Single Source Range Length: 65535 00:10:49.641 Maximum Copy Length: 65535 00:10:49.641 Maximum Source Range Count: 1 00:10:49.641 NGUID/EUI64 Never Reused: No 00:10:49.641 Namespace Write Protected: No 00:10:49.641 Number of LBA Formats: 1 00:10:49.641 Current LBA Format: LBA Format #00 00:10:49.641 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:49.641 00:10:49.641 16:52:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:49.641 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.898 [2024-07-15 16:52:56.369053] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:55.155 Initializing NVMe Controllers 00:10:55.155 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:55.155 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:55.155 Initialization complete. Launching workers. 
00:10:55.155 ======================================================== 00:10:55.155 Latency(us) 00:10:55.155 Device Information : IOPS MiB/s Average min max 00:10:55.155 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39909.76 155.90 3207.05 947.34 7296.21 00:10:55.155 ======================================================== 00:10:55.155 Total : 39909.76 155.90 3207.05 947.34 7296.21 00:10:55.155 00:10:55.155 [2024-07-15 16:53:01.387070] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:55.155 16:53:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:55.155 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.155 [2024-07-15 16:53:01.613133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:00.493 Initializing NVMe Controllers 00:11:00.493 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:00.493 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:00.493 Initialization complete. Launching workers. 
00:11:00.493 ======================================================== 00:11:00.493 Latency(us) 00:11:00.493 Device Information : IOPS MiB/s Average min max 00:11:00.493 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.07 62.70 7979.87 6974.52 8982.23 00:11:00.493 ======================================================== 00:11:00.493 Total : 16051.07 62.70 7979.87 6974.52 8982.23 00:11:00.493 00:11:00.493 [2024-07-15 16:53:06.656555] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:00.493 16:53:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:00.493 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.493 [2024-07-15 16:53:06.853516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:05.756 [2024-07-15 16:53:11.962727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:05.756 Initializing NVMe Controllers 00:11:05.756 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:05.756 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:05.756 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:05.756 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:05.756 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:05.756 Initialization complete. Launching workers. 
00:11:05.756 Starting thread on core 2 00:11:05.756 Starting thread on core 3 00:11:05.756 Starting thread on core 1 00:11:05.756 16:53:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:05.756 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.756 [2024-07-15 16:53:12.248617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:09.036 [2024-07-15 16:53:15.298484] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:09.036 Initializing NVMe Controllers 00:11:09.036 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:09.036 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:09.036 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:09.036 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:09.036 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:09.036 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:09.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:09.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:09.036 Initialization complete. Launching workers. 
00:11:09.036 Starting thread on core 1 with urgent priority queue 00:11:09.036 Starting thread on core 2 with urgent priority queue 00:11:09.036 Starting thread on core 3 with urgent priority queue 00:11:09.036 Starting thread on core 0 with urgent priority queue 00:11:09.036 SPDK bdev Controller (SPDK1 ) core 0: 2344.33 IO/s 42.66 secs/100000 ios 00:11:09.036 SPDK bdev Controller (SPDK1 ) core 1: 2439.67 IO/s 40.99 secs/100000 ios 00:11:09.036 SPDK bdev Controller (SPDK1 ) core 2: 2858.33 IO/s 34.99 secs/100000 ios 00:11:09.036 SPDK bdev Controller (SPDK1 ) core 3: 2835.33 IO/s 35.27 secs/100000 ios 00:11:09.036 ======================================================== 00:11:09.036 00:11:09.036 16:53:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:09.036 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.036 [2024-07-15 16:53:15.569598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:09.036 Initializing NVMe Controllers 00:11:09.036 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:09.036 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:09.036 Namespace ID: 1 size: 0GB 00:11:09.036 Initialization complete. 00:11:09.036 INFO: using host memory buffer for IO 00:11:09.036 Hello world! 
00:11:09.036 [2024-07-15 16:53:15.603838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:09.036 16:53:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:09.036 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.292 [2024-07-15 16:53:15.876279] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:10.662 Initializing NVMe Controllers 00:11:10.662 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:10.662 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:10.662 Initialization complete. Launching workers. 00:11:10.662 submit (in ns) avg, min, max = 7142.9, 3199.1, 4002930.4 00:11:10.662 complete (in ns) avg, min, max = 21273.2, 1762.6, 4007954.8 00:11:10.662 00:11:10.662 Submit histogram 00:11:10.662 ================ 00:11:10.662 Range in us Cumulative Count 00:11:10.662 3.186 - 3.200: 0.0061% ( 1) 00:11:10.662 3.200 - 3.214: 0.0244% ( 3) 00:11:10.662 3.214 - 3.228: 0.0731% ( 8) 00:11:10.662 3.228 - 3.242: 0.1340% ( 10) 00:11:10.662 3.242 - 3.256: 0.2253% ( 15) 00:11:10.662 3.256 - 3.270: 0.3410% ( 19) 00:11:10.662 3.270 - 3.283: 0.6212% ( 46) 00:11:10.662 3.283 - 3.297: 1.6017% ( 161) 00:11:10.662 3.297 - 3.311: 2.9903% ( 228) 00:11:10.662 3.311 - 3.325: 4.6894% ( 279) 00:11:10.662 3.325 - 3.339: 6.7905% ( 345) 00:11:10.662 3.339 - 3.353: 10.0122% ( 529) 00:11:10.662 3.353 - 3.367: 15.2192% ( 855) 00:11:10.662 3.367 - 3.381: 20.9135% ( 935) 00:11:10.662 3.381 - 3.395: 26.8453% ( 974) 00:11:10.662 3.395 - 3.409: 32.4970% ( 928) 00:11:10.662 3.409 - 3.423: 37.9659% ( 898) 00:11:10.662 3.423 - 3.437: 43.3069% ( 877) 00:11:10.662 3.437 - 3.450: 49.5859% ( 1031) 00:11:10.662 3.450 - 3.464: 54.5128% ( 809) 00:11:10.662 
3.464 - 3.478: 58.6114% ( 673) 00:11:10.662 3.478 - 3.492: 62.8502% ( 696) 00:11:10.662 3.492 - 3.506: 68.9464% ( 1001) 00:11:10.662 3.506 - 3.520: 73.7576% ( 790) 00:11:10.662 3.520 - 3.534: 77.0402% ( 539) 00:11:10.662 3.534 - 3.548: 80.6638% ( 595) 00:11:10.662 3.548 - 3.562: 83.4957% ( 465) 00:11:10.662 3.562 - 3.590: 86.5469% ( 501) 00:11:10.662 3.590 - 3.617: 87.8745% ( 218) 00:11:10.662 3.617 - 3.645: 88.8672% ( 163) 00:11:10.662 3.645 - 3.673: 90.5055% ( 269) 00:11:10.662 3.673 - 3.701: 92.2594% ( 288) 00:11:10.662 3.701 - 3.729: 93.8368% ( 259) 00:11:10.662 3.729 - 3.757: 95.5907% ( 288) 00:11:10.662 3.757 - 3.784: 97.0950% ( 247) 00:11:10.662 3.784 - 3.812: 98.1608% ( 175) 00:11:10.662 3.812 - 3.840: 98.7881% ( 103) 00:11:10.662 3.840 - 3.868: 99.1048% ( 52) 00:11:10.662 3.868 - 3.896: 99.3666% ( 43) 00:11:10.662 3.896 - 3.923: 99.4945% ( 21) 00:11:10.662 3.923 - 3.951: 99.5128% ( 3) 00:11:10.662 3.951 - 3.979: 99.5189% ( 1) 00:11:10.662 3.979 - 4.007: 99.5250% ( 1) 00:11:10.662 4.063 - 4.090: 99.5311% ( 1) 00:11:10.662 4.953 - 4.981: 99.5371% ( 1) 00:11:10.662 5.231 - 5.259: 99.5432% ( 1) 00:11:10.662 5.343 - 5.370: 99.5493% ( 1) 00:11:10.662 5.649 - 5.677: 99.5615% ( 2) 00:11:10.662 5.677 - 5.704: 99.5676% ( 1) 00:11:10.662 5.816 - 5.843: 99.5737% ( 1) 00:11:10.662 5.899 - 5.927: 99.5798% ( 1) 00:11:10.662 5.955 - 5.983: 99.5859% ( 1) 00:11:10.662 5.983 - 6.010: 99.5920% ( 1) 00:11:10.662 6.066 - 6.094: 99.5981% ( 1) 00:11:10.662 6.122 - 6.150: 99.6041% ( 1) 00:11:10.662 6.233 - 6.261: 99.6102% ( 1) 00:11:10.662 6.372 - 6.400: 99.6163% ( 1) 00:11:10.662 6.400 - 6.428: 99.6285% ( 2) 00:11:10.662 6.428 - 6.456: 99.6346% ( 1) 00:11:10.662 6.456 - 6.483: 99.6468% ( 2) 00:11:10.662 6.567 - 6.595: 99.6590% ( 2) 00:11:10.662 6.650 - 6.678: 99.6650% ( 1) 00:11:10.662 6.678 - 6.706: 99.6711% ( 1) 00:11:10.662 6.706 - 6.734: 99.6772% ( 1) 00:11:10.662 6.929 - 6.957: 99.6894% ( 2) 00:11:10.662 7.012 - 7.040: 99.6955% ( 1) 00:11:10.662 7.040 - 7.068: 99.7016% ( 1) 
00:11:10.662 7.068 - 7.096: 99.7138% ( 2) 00:11:10.662 7.290 - 7.346: 99.7259% ( 2) 00:11:10.662 7.346 - 7.402: 99.7381% ( 2) 00:11:10.662 7.402 - 7.457: 99.7442% ( 1) 00:11:10.662 7.624 - 7.680: 99.7564% ( 2) 00:11:10.662 7.680 - 7.736: 99.7625% ( 1) 00:11:10.662 7.791 - 7.847: 99.7686% ( 1) 00:11:10.662 7.847 - 7.903: 99.7747% ( 1) 00:11:10.662 8.070 - 8.125: 99.7868% ( 2) 00:11:10.662 8.570 - 8.626: 99.8051% ( 3) 00:11:10.662 8.793 - 8.849: 99.8112% ( 1) 00:11:10.662 8.849 - 8.904: 99.8173% ( 1) 00:11:10.662 [2024-07-15 16:53:16.900243] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:10.662 9.071 - 9.127: 99.8234% ( 1) 00:11:10.662 9.183 - 9.238: 99.8295% ( 1) 00:11:10.662 9.350 - 9.405: 99.8356% ( 1) 00:11:10.662 9.906 - 9.962: 99.8417% ( 1) 00:11:10.662 9.962 - 10.017: 99.8477% ( 1) 00:11:10.662 13.690 - 13.746: 99.8538% ( 1) 00:11:10.662 13.802 - 13.857: 99.8599% ( 1) 00:11:10.662 13.913 - 13.969: 99.8782% ( 3) 00:11:10.662 14.024 - 14.080: 99.8843% ( 1) 00:11:10.662 18.922 - 19.033: 99.8904% ( 1) 00:11:10.662 19.256 - 19.367: 99.9026% ( 2) 00:11:10.662 22.261 - 22.372: 99.9086% ( 1) 00:11:10.662 3989.148 - 4017.642: 100.0000% ( 15) 00:11:10.662 00:11:10.662 Complete histogram 00:11:10.662 ================== 00:11:10.662 Range in us Cumulative Count 00:11:10.662 1.760 - 1.767: 0.0244% ( 4) 00:11:10.662 1.767 - 1.774: 0.0974% ( 12) 00:11:10.662 1.774 - 1.781: 0.1766% ( 13) 00:11:10.662 1.781 - 1.795: 0.2619% ( 14) 00:11:10.662 1.795 - 1.809: 0.2862% ( 4) 00:11:10.662 1.809 - 1.823: 1.7722% ( 244) 00:11:10.662 1.823 - 1.837: 6.6931% ( 808) 00:11:10.662 1.837 - 1.850: 9.2753% ( 424) 00:11:10.662 1.850 - 1.864: 10.1157% ( 138) 00:11:10.662 1.864 - 1.878: 26.1145% ( 2627) 00:11:10.662 1.878 - 1.892: 75.7856% ( 8156) 00:11:10.662 1.892 - 1.906: 92.2838% ( 2709) 00:11:10.662 1.906 - 1.920: 95.0914% ( 461) 00:11:10.662 1.920 - 1.934: 96.0962% ( 165) 00:11:10.662 1.934 - 1.948: 96.7113% ( 101) 00:11:10.662 
1.948 - 1.962: 98.0268% ( 216) 00:11:10.662 1.962 - 1.976: 98.9160% ( 146) 00:11:10.662 1.976 - 1.990: 99.1961% ( 46) 00:11:10.662 1.990 - 2.003: 99.2326% ( 6) 00:11:10.662 2.003 - 2.017: 99.2448% ( 2) 00:11:10.662 2.017 - 2.031: 99.2509% ( 1) 00:11:10.662 2.031 - 2.045: 99.2570% ( 1) 00:11:10.662 2.073 - 2.087: 99.2631% ( 1) 00:11:10.662 2.101 - 2.115: 99.2692% ( 1) 00:11:10.662 2.310 - 2.323: 99.2753% ( 1) 00:11:10.662 3.757 - 3.784: 99.2814% ( 1) 00:11:10.662 3.784 - 3.812: 99.2875% ( 1) 00:11:10.662 3.868 - 3.896: 99.2935% ( 1) 00:11:10.662 3.896 - 3.923: 99.2996% ( 1) 00:11:10.662 4.090 - 4.118: 99.3057% ( 1) 00:11:10.662 4.369 - 4.397: 99.3118% ( 1) 00:11:10.662 4.424 - 4.452: 99.3240% ( 2) 00:11:10.662 4.452 - 4.480: 99.3301% ( 1) 00:11:10.662 4.480 - 4.508: 99.3362% ( 1) 00:11:10.663 4.703 - 4.730: 99.3423% ( 1) 00:11:10.663 5.092 - 5.120: 99.3484% ( 1) 00:11:10.663 5.120 - 5.148: 99.3666% ( 3) 00:11:10.663 5.203 - 5.231: 99.3727% ( 1) 00:11:10.663 5.287 - 5.315: 99.3788% ( 1) 00:11:10.663 5.426 - 5.454: 99.3849% ( 1) 00:11:10.663 5.482 - 5.510: 99.3910% ( 1) 00:11:10.663 5.510 - 5.537: 99.3971% ( 1) 00:11:10.663 5.843 - 5.871: 99.4032% ( 1) 00:11:10.663 5.871 - 5.899: 99.4093% ( 1) 00:11:10.663 5.955 - 5.983: 99.4153% ( 1) 00:11:10.663 6.122 - 6.150: 99.4214% ( 1) 00:11:10.663 6.150 - 6.177: 99.4275% ( 1) 00:11:10.663 6.233 - 6.261: 99.4336% ( 1) 00:11:10.663 6.873 - 6.901: 99.4397% ( 1) 00:11:10.663 7.123 - 7.179: 99.4458% ( 1) 00:11:10.663 7.290 - 7.346: 99.4519% ( 1) 00:11:10.663 7.402 - 7.457: 99.4580% ( 1) 00:11:10.663 7.680 - 7.736: 99.4641% ( 1) 00:11:10.663 7.958 - 8.014: 99.4702% ( 1) 00:11:10.663 9.016 - 9.071: 99.4762% ( 1) 00:11:10.663 12.188 - 12.243: 99.4823% ( 1) 00:11:10.663 17.363 - 17.475: 99.4884% ( 1) 00:11:10.663 17.586 - 17.697: 99.4945% ( 1) 00:11:10.663 17.697 - 17.809: 99.5067% ( 2) 00:11:10.663 39.847 - 40.070: 99.5128% ( 1) 00:11:10.663 3091.590 - 3105.837: 99.5189% ( 1) 00:11:10.663 3447.763 - 3462.010: 99.5250% ( 1) 
00:11:10.663 3989.148 - 4017.642: 100.0000% ( 78) 00:11:10.663 00:11:10.663 16:53:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:10.663 16:53:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:10.663 16:53:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:10.663 16:53:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:10.663 16:53:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:10.663 [ 00:11:10.663 { 00:11:10.663 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:10.663 "subtype": "Discovery", 00:11:10.663 "listen_addresses": [], 00:11:10.663 "allow_any_host": true, 00:11:10.663 "hosts": [] 00:11:10.663 }, 00:11:10.663 { 00:11:10.663 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:10.663 "subtype": "NVMe", 00:11:10.663 "listen_addresses": [ 00:11:10.663 { 00:11:10.663 "trtype": "VFIOUSER", 00:11:10.663 "adrfam": "IPv4", 00:11:10.663 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:10.663 "trsvcid": "0" 00:11:10.663 } 00:11:10.663 ], 00:11:10.663 "allow_any_host": true, 00:11:10.663 "hosts": [], 00:11:10.663 "serial_number": "SPDK1", 00:11:10.663 "model_number": "SPDK bdev Controller", 00:11:10.663 "max_namespaces": 32, 00:11:10.663 "min_cntlid": 1, 00:11:10.663 "max_cntlid": 65519, 00:11:10.663 "namespaces": [ 00:11:10.663 { 00:11:10.663 "nsid": 1, 00:11:10.663 "bdev_name": "Malloc1", 00:11:10.663 "name": "Malloc1", 00:11:10.663 "nguid": "3A17FD44A6CB4C0E888CAAA300E4965F", 00:11:10.663 "uuid": "3a17fd44-a6cb-4c0e-888c-aaa300e4965f" 00:11:10.663 } 00:11:10.663 ] 00:11:10.663 }, 00:11:10.663 { 00:11:10.663 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:10.663 "subtype": "NVMe", 
00:11:10.663 "listen_addresses": [ 00:11:10.663 { 00:11:10.663 "trtype": "VFIOUSER", 00:11:10.663 "adrfam": "IPv4", 00:11:10.663 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:10.663 "trsvcid": "0" 00:11:10.663 } 00:11:10.663 ], 00:11:10.663 "allow_any_host": true, 00:11:10.663 "hosts": [], 00:11:10.663 "serial_number": "SPDK2", 00:11:10.663 "model_number": "SPDK bdev Controller", 00:11:10.663 "max_namespaces": 32, 00:11:10.663 "min_cntlid": 1, 00:11:10.663 "max_cntlid": 65519, 00:11:10.663 "namespaces": [ 00:11:10.663 { 00:11:10.663 "nsid": 1, 00:11:10.663 "bdev_name": "Malloc2", 00:11:10.663 "name": "Malloc2", 00:11:10.663 "nguid": "FB352088C2B649B0A0F59A2106B1D60B", 00:11:10.663 "uuid": "fb352088-c2b6-49b0-a0f5-9a2106b1d60b" 00:11:10.663 } 00:11:10.663 ] 00:11:10.663 } 00:11:10.663 ] 00:11:10.663 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:10.663 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:10.663 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=5127 00:11:10.663 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:10.663 16:53:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:10.663 16:53:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:10.663 16:53:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:10.663 16:53:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:10.663 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:10.663 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:10.663 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.663 [2024-07-15 16:53:17.257658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:10.663 Malloc3 00:11:10.920 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:10.920 [2024-07-15 16:53:17.499471] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:10.920 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:10.920 Asynchronous Event Request test 00:11:10.920 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:10.920 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:10.920 Registering asynchronous event callbacks... 00:11:10.920 Starting namespace attribute notice tests for all controllers... 00:11:10.920 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:10.920 aer_cb - Changed Namespace 00:11:10.920 Cleaning up... 
00:11:11.177 [ 00:11:11.177 { 00:11:11.177 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:11.177 "subtype": "Discovery", 00:11:11.177 "listen_addresses": [], 00:11:11.177 "allow_any_host": true, 00:11:11.177 "hosts": [] 00:11:11.177 }, 00:11:11.177 { 00:11:11.177 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:11.177 "subtype": "NVMe", 00:11:11.177 "listen_addresses": [ 00:11:11.177 { 00:11:11.177 "trtype": "VFIOUSER", 00:11:11.177 "adrfam": "IPv4", 00:11:11.177 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:11.177 "trsvcid": "0" 00:11:11.177 } 00:11:11.177 ], 00:11:11.177 "allow_any_host": true, 00:11:11.177 "hosts": [], 00:11:11.177 "serial_number": "SPDK1", 00:11:11.177 "model_number": "SPDK bdev Controller", 00:11:11.177 "max_namespaces": 32, 00:11:11.177 "min_cntlid": 1, 00:11:11.177 "max_cntlid": 65519, 00:11:11.177 "namespaces": [ 00:11:11.177 { 00:11:11.177 "nsid": 1, 00:11:11.177 "bdev_name": "Malloc1", 00:11:11.177 "name": "Malloc1", 00:11:11.177 "nguid": "3A17FD44A6CB4C0E888CAAA300E4965F", 00:11:11.177 "uuid": "3a17fd44-a6cb-4c0e-888c-aaa300e4965f" 00:11:11.177 }, 00:11:11.177 { 00:11:11.177 "nsid": 2, 00:11:11.177 "bdev_name": "Malloc3", 00:11:11.177 "name": "Malloc3", 00:11:11.177 "nguid": "917827A06BFC441F82E86BBB2347CF6D", 00:11:11.177 "uuid": "917827a0-6bfc-441f-82e8-6bbb2347cf6d" 00:11:11.177 } 00:11:11.177 ] 00:11:11.177 }, 00:11:11.177 { 00:11:11.177 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:11.177 "subtype": "NVMe", 00:11:11.177 "listen_addresses": [ 00:11:11.177 { 00:11:11.177 "trtype": "VFIOUSER", 00:11:11.177 "adrfam": "IPv4", 00:11:11.177 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:11.177 "trsvcid": "0" 00:11:11.177 } 00:11:11.177 ], 00:11:11.177 "allow_any_host": true, 00:11:11.177 "hosts": [], 00:11:11.177 "serial_number": "SPDK2", 00:11:11.177 "model_number": "SPDK bdev Controller", 00:11:11.177 "max_namespaces": 32, 00:11:11.177 "min_cntlid": 1, 00:11:11.177 "max_cntlid": 65519, 00:11:11.177 "namespaces": [ 
00:11:11.177 { 00:11:11.177 "nsid": 1, 00:11:11.177 "bdev_name": "Malloc2", 00:11:11.177 "name": "Malloc2", 00:11:11.177 "nguid": "FB352088C2B649B0A0F59A2106B1D60B", 00:11:11.177 "uuid": "fb352088-c2b6-49b0-a0f5-9a2106b1d60b" 00:11:11.177 } 00:11:11.177 ] 00:11:11.177 } 00:11:11.177 ] 00:11:11.177 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 5127 00:11:11.177 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:11.177 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:11.177 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:11.177 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:11.177 [2024-07-15 16:53:17.734899] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:11:11.177 [2024-07-15 16:53:17.734933] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid5337 ] 00:11:11.177 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.177 [2024-07-15 16:53:17.764622] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:11.177 [2024-07-15 16:53:17.774949] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:11.177 [2024-07-15 16:53:17.774970] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7286887000 00:11:11.177 [2024-07-15 16:53:17.775946] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:11.177 [2024-07-15 16:53:17.776957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:11.177 [2024-07-15 16:53:17.777965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:11.177 [2024-07-15 16:53:17.778967] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:11.177 [2024-07-15 16:53:17.779973] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:11.177 [2024-07-15 16:53:17.780980] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:11.177 [2024-07-15 16:53:17.781982] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:11.177 [2024-07-15 16:53:17.782989] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:11.177 [2024-07-15 16:53:17.783995] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:11.178 [2024-07-15 16:53:17.784004] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f728687c000 00:11:11.178 [2024-07-15 16:53:17.784945] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:11.178 [2024-07-15 16:53:17.797482] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:11.178 [2024-07-15 16:53:17.797505] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:11.178 [2024-07-15 16:53:17.799565] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:11.178 [2024-07-15 16:53:17.799602] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:11.178 [2024-07-15 16:53:17.799670] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:11.178 [2024-07-15 16:53:17.799685] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:11.178 [2024-07-15 16:53:17.799690] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:11.178 [2024-07-15 16:53:17.800573] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:11.178 [2024-07-15 16:53:17.800582] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:11.178 [2024-07-15 16:53:17.800589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:11.178 [2024-07-15 16:53:17.801574] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:11.178 [2024-07-15 16:53:17.801583] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:11.178 [2024-07-15 16:53:17.801592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:11.178 [2024-07-15 16:53:17.802584] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:11.178 [2024-07-15 16:53:17.802592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:11.178 [2024-07-15 16:53:17.803591] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:11.178 [2024-07-15 16:53:17.803599] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:11.178 [2024-07-15 16:53:17.803603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:11.178 [2024-07-15 16:53:17.803609] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:11.178 [2024-07-15 16:53:17.803715] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:11.178 [2024-07-15 16:53:17.803719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:11.178 [2024-07-15 16:53:17.803723] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:11.178 [2024-07-15 16:53:17.804595] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:11.178 [2024-07-15 16:53:17.805599] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:11.178 [2024-07-15 16:53:17.806612] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:11.178 [2024-07-15 16:53:17.807618] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:11.178 [2024-07-15 16:53:17.807655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:11.178 [2024-07-15 16:53:17.808627] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:11.178 [2024-07-15 16:53:17.808635] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:11.178 [2024-07-15 16:53:17.808640] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:11.178 [2024-07-15 16:53:17.808656] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:11.178 [2024-07-15 16:53:17.808664] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:11.178 [2024-07-15 16:53:17.808674] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:11.178 [2024-07-15 16:53:17.808678] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:11.178 [2024-07-15 16:53:17.808689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:11.178 [2024-07-15 16:53:17.815235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:11.178 [2024-07-15 16:53:17.815246] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:11.178 [2024-07-15 16:53:17.815255] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:11.178 [2024-07-15 16:53:17.815260] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:11.178 [2024-07-15 16:53:17.815264] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:11.178 [2024-07-15 16:53:17.815268] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:11.178 [2024-07-15 
16:53:17.815272] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:11.178 [2024-07-15 16:53:17.815276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:11.178 [2024-07-15 16:53:17.815283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:11.178 [2024-07-15 16:53:17.815292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:11.178 [2024-07-15 16:53:17.823232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:11.178 [2024-07-15 16:53:17.823246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.178 [2024-07-15 16:53:17.823254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.178 [2024-07-15 16:53:17.823261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.178 [2024-07-15 16:53:17.823268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.178 [2024-07-15 16:53:17.823272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:11.178 [2024-07-15 16:53:17.823280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:11.178 [2024-07-15 
16:53:17.823288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:11.178 [2024-07-15 16:53:17.831231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:11.178 [2024-07-15 16:53:17.831239] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:11.178 [2024-07-15 16:53:17.831243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:11.178 [2024-07-15 16:53:17.831249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:11.178 [2024-07-15 16:53:17.831254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:11.178 [2024-07-15 16:53:17.831262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:11.178 [2024-07-15 16:53:17.839229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:11.178 [2024-07-15 16:53:17.839282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:11.178 [2024-07-15 16:53:17.839290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:11.178 [2024-07-15 16:53:17.839299] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:11.178 [2024-07-15 
16:53:17.839303] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:11.178 [2024-07-15 16:53:17.839309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:11.436 [2024-07-15 16:53:17.847230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:11.436 [2024-07-15 16:53:17.847242] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:11.436 [2024-07-15 16:53:17.847251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:11.436 [2024-07-15 16:53:17.847258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:11.436 [2024-07-15 16:53:17.847265] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:11.436 [2024-07-15 16:53:17.847268] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:11.436 [2024-07-15 16:53:17.847275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:11.436 [2024-07-15 16:53:17.855232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:11.436 [2024-07-15 16:53:17.855245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:11.436 [2024-07-15 16:53:17.855252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:11:11.436 [2024-07-15 16:53:17.855259] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:11.436 [2024-07-15 16:53:17.855263] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:11.436 [2024-07-15 16:53:17.855269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:11.436 [2024-07-15 16:53:17.863231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:11.436 [2024-07-15 16:53:17.863241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:11.436 [2024-07-15 16:53:17.863247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:11.436 [2024-07-15 16:53:17.863254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:11.436 [2024-07-15 16:53:17.863259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:11.436 [2024-07-15 16:53:17.863263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:11.436 [2024-07-15 16:53:17.863268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:11.436 [2024-07-15 16:53:17.863272] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:11:11.436 [2024-07-15 16:53:17.863276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:11.436 [2024-07-15 16:53:17.863285] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:11.436 [2024-07-15 16:53:17.863300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:11.436 [2024-07-15 16:53:17.871231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:11.436 [2024-07-15 16:53:17.871243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:11.436 [2024-07-15 16:53:17.879232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:11.436 [2024-07-15 16:53:17.879243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:11.436 [2024-07-15 16:53:17.887230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:11.436 [2024-07-15 16:53:17.887242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:11.436 [2024-07-15 16:53:17.895231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:11.436 [2024-07-15 16:53:17.895246] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:11.436 [2024-07-15 16:53:17.895250] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:11.436 [2024-07-15 
16:53:17.895253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:11.436 [2024-07-15 16:53:17.895257] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:11.436 [2024-07-15 16:53:17.895262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:11.436 [2024-07-15 16:53:17.895269] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:11.436 [2024-07-15 16:53:17.895273] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:11.436 [2024-07-15 16:53:17.895278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:11.436 [2024-07-15 16:53:17.895284] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:11.436 [2024-07-15 16:53:17.895288] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:11.436 [2024-07-15 16:53:17.895294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:11.436 [2024-07-15 16:53:17.895300] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:11.436 [2024-07-15 16:53:17.895304] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:11.436 [2024-07-15 16:53:17.895309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:11.436 [2024-07-15 16:53:17.903231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:11.436 [2024-07-15 16:53:17.903244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:11.436 [2024-07-15 16:53:17.903254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:11.436 [2024-07-15 16:53:17.903260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:11.436 ===================================================== 00:11:11.436 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:11.436 ===================================================== 00:11:11.436 Controller Capabilities/Features 00:11:11.436 ================================ 00:11:11.436 Vendor ID: 4e58 00:11:11.436 Subsystem Vendor ID: 4e58 00:11:11.436 Serial Number: SPDK2 00:11:11.436 Model Number: SPDK bdev Controller 00:11:11.436 Firmware Version: 24.09 00:11:11.436 Recommended Arb Burst: 6 00:11:11.436 IEEE OUI Identifier: 8d 6b 50 00:11:11.436 Multi-path I/O 00:11:11.436 May have multiple subsystem ports: Yes 00:11:11.436 May have multiple controllers: Yes 00:11:11.436 Associated with SR-IOV VF: No 00:11:11.436 Max Data Transfer Size: 131072 00:11:11.436 Max Number of Namespaces: 32 00:11:11.436 Max Number of I/O Queues: 127 00:11:11.436 NVMe Specification Version (VS): 1.3 00:11:11.436 NVMe Specification Version (Identify): 1.3 00:11:11.436 Maximum Queue Entries: 256 00:11:11.436 Contiguous Queues Required: Yes 00:11:11.436 Arbitration Mechanisms Supported 00:11:11.436 Weighted Round Robin: Not Supported 00:11:11.436 Vendor Specific: Not Supported 00:11:11.436 Reset Timeout: 15000 ms 00:11:11.436 Doorbell Stride: 4 bytes 00:11:11.436 NVM Subsystem Reset: Not Supported 00:11:11.436 Command Sets Supported 00:11:11.436 NVM Command Set: Supported 00:11:11.436 Boot Partition: Not Supported 
00:11:11.436 Memory Page Size Minimum: 4096 bytes 00:11:11.436 Memory Page Size Maximum: 4096 bytes 00:11:11.436 Persistent Memory Region: Not Supported 00:11:11.436 Optional Asynchronous Events Supported 00:11:11.436 Namespace Attribute Notices: Supported 00:11:11.436 Firmware Activation Notices: Not Supported 00:11:11.436 ANA Change Notices: Not Supported 00:11:11.436 PLE Aggregate Log Change Notices: Not Supported 00:11:11.436 LBA Status Info Alert Notices: Not Supported 00:11:11.436 EGE Aggregate Log Change Notices: Not Supported 00:11:11.436 Normal NVM Subsystem Shutdown event: Not Supported 00:11:11.436 Zone Descriptor Change Notices: Not Supported 00:11:11.436 Discovery Log Change Notices: Not Supported 00:11:11.436 Controller Attributes 00:11:11.436 128-bit Host Identifier: Supported 00:11:11.437 Non-Operational Permissive Mode: Not Supported 00:11:11.437 NVM Sets: Not Supported 00:11:11.437 Read Recovery Levels: Not Supported 00:11:11.437 Endurance Groups: Not Supported 00:11:11.437 Predictable Latency Mode: Not Supported 00:11:11.437 Traffic Based Keep ALive: Not Supported 00:11:11.437 Namespace Granularity: Not Supported 00:11:11.437 SQ Associations: Not Supported 00:11:11.437 UUID List: Not Supported 00:11:11.437 Multi-Domain Subsystem: Not Supported 00:11:11.437 Fixed Capacity Management: Not Supported 00:11:11.437 Variable Capacity Management: Not Supported 00:11:11.437 Delete Endurance Group: Not Supported 00:11:11.437 Delete NVM Set: Not Supported 00:11:11.437 Extended LBA Formats Supported: Not Supported 00:11:11.437 Flexible Data Placement Supported: Not Supported 00:11:11.437 00:11:11.437 Controller Memory Buffer Support 00:11:11.437 ================================ 00:11:11.437 Supported: No 00:11:11.437 00:11:11.437 Persistent Memory Region Support 00:11:11.437 ================================ 00:11:11.437 Supported: No 00:11:11.437 00:11:11.437 Admin Command Set Attributes 00:11:11.437 ============================ 00:11:11.437 Security 
Send/Receive: Not Supported 00:11:11.437 Format NVM: Not Supported 00:11:11.437 Firmware Activate/Download: Not Supported 00:11:11.437 Namespace Management: Not Supported 00:11:11.437 Device Self-Test: Not Supported 00:11:11.437 Directives: Not Supported 00:11:11.437 NVMe-MI: Not Supported 00:11:11.437 Virtualization Management: Not Supported 00:11:11.437 Doorbell Buffer Config: Not Supported 00:11:11.437 Get LBA Status Capability: Not Supported 00:11:11.437 Command & Feature Lockdown Capability: Not Supported 00:11:11.437 Abort Command Limit: 4 00:11:11.437 Async Event Request Limit: 4 00:11:11.437 Number of Firmware Slots: N/A 00:11:11.437 Firmware Slot 1 Read-Only: N/A 00:11:11.437 Firmware Activation Without Reset: N/A 00:11:11.437 Multiple Update Detection Support: N/A 00:11:11.437 Firmware Update Granularity: No Information Provided 00:11:11.437 Per-Namespace SMART Log: No 00:11:11.437 Asymmetric Namespace Access Log Page: Not Supported 00:11:11.437 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:11.437 Command Effects Log Page: Supported 00:11:11.437 Get Log Page Extended Data: Supported 00:11:11.437 Telemetry Log Pages: Not Supported 00:11:11.437 Persistent Event Log Pages: Not Supported 00:11:11.437 Supported Log Pages Log Page: May Support 00:11:11.437 Commands Supported & Effects Log Page: Not Supported 00:11:11.437 Feature Identifiers & Effects Log Page:May Support 00:11:11.437 NVMe-MI Commands & Effects Log Page: May Support 00:11:11.437 Data Area 4 for Telemetry Log: Not Supported 00:11:11.437 Error Log Page Entries Supported: 128 00:11:11.437 Keep Alive: Supported 00:11:11.437 Keep Alive Granularity: 10000 ms 00:11:11.437 00:11:11.437 NVM Command Set Attributes 00:11:11.437 ========================== 00:11:11.437 Submission Queue Entry Size 00:11:11.437 Max: 64 00:11:11.437 Min: 64 00:11:11.437 Completion Queue Entry Size 00:11:11.437 Max: 16 00:11:11.437 Min: 16 00:11:11.437 Number of Namespaces: 32 00:11:11.437 Compare Command: Supported 
00:11:11.437 Write Uncorrectable Command: Not Supported 00:11:11.437 Dataset Management Command: Supported 00:11:11.437 Write Zeroes Command: Supported 00:11:11.437 Set Features Save Field: Not Supported 00:11:11.437 Reservations: Not Supported 00:11:11.437 Timestamp: Not Supported 00:11:11.437 Copy: Supported 00:11:11.437 Volatile Write Cache: Present 00:11:11.437 Atomic Write Unit (Normal): 1 00:11:11.437 Atomic Write Unit (PFail): 1 00:11:11.437 Atomic Compare & Write Unit: 1 00:11:11.437 Fused Compare & Write: Supported 00:11:11.437 Scatter-Gather List 00:11:11.437 SGL Command Set: Supported (Dword aligned) 00:11:11.437 SGL Keyed: Not Supported 00:11:11.437 SGL Bit Bucket Descriptor: Not Supported 00:11:11.437 SGL Metadata Pointer: Not Supported 00:11:11.437 Oversized SGL: Not Supported 00:11:11.437 SGL Metadata Address: Not Supported 00:11:11.437 SGL Offset: Not Supported 00:11:11.437 Transport SGL Data Block: Not Supported 00:11:11.437 Replay Protected Memory Block: Not Supported 00:11:11.437 00:11:11.437 Firmware Slot Information 00:11:11.437 ========================= 00:11:11.437 Active slot: 1 00:11:11.437 Slot 1 Firmware Revision: 24.09 00:11:11.437 00:11:11.437 00:11:11.437 Commands Supported and Effects 00:11:11.437 ============================== 00:11:11.437 Admin Commands 00:11:11.437 -------------- 00:11:11.437 Get Log Page (02h): Supported 00:11:11.437 Identify (06h): Supported 00:11:11.437 Abort (08h): Supported 00:11:11.437 Set Features (09h): Supported 00:11:11.437 Get Features (0Ah): Supported 00:11:11.437 Asynchronous Event Request (0Ch): Supported 00:11:11.437 Keep Alive (18h): Supported 00:11:11.437 I/O Commands 00:11:11.437 ------------ 00:11:11.437 Flush (00h): Supported LBA-Change 00:11:11.437 Write (01h): Supported LBA-Change 00:11:11.437 Read (02h): Supported 00:11:11.437 Compare (05h): Supported 00:11:11.437 Write Zeroes (08h): Supported LBA-Change 00:11:11.437 Dataset Management (09h): Supported LBA-Change 00:11:11.437 Copy (19h): 
Supported LBA-Change 00:11:11.437 00:11:11.437 Error Log 00:11:11.437 ========= 00:11:11.437 00:11:11.437 Arbitration 00:11:11.437 =========== 00:11:11.437 Arbitration Burst: 1 00:11:11.437 00:11:11.437 Power Management 00:11:11.437 ================ 00:11:11.437 Number of Power States: 1 00:11:11.437 Current Power State: Power State #0 00:11:11.437 Power State #0: 00:11:11.437 Max Power: 0.00 W 00:11:11.437 Non-Operational State: Operational 00:11:11.437 Entry Latency: Not Reported 00:11:11.437 Exit Latency: Not Reported 00:11:11.437 Relative Read Throughput: 0 00:11:11.437 Relative Read Latency: 0 00:11:11.437 Relative Write Throughput: 0 00:11:11.437 Relative Write Latency: 0 00:11:11.437 Idle Power: Not Reported 00:11:11.437 Active Power: Not Reported 00:11:11.437 Non-Operational Permissive Mode: Not Supported 00:11:11.437 00:11:11.437 Health Information 00:11:11.437 ================== 00:11:11.437 Critical Warnings: 00:11:11.437 Available Spare Space: OK 00:11:11.437 Temperature: OK 00:11:11.437 Device Reliability: OK 00:11:11.437 Read Only: No 00:11:11.437 Volatile Memory Backup: OK 00:11:11.437 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:11.437 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:11.437 Available Spare: 0% 00:11:11.437 Available Sp[2024-07-15 16:53:17.903349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:11.437 [2024-07-15 16:53:17.911230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:11.437 [2024-07-15 16:53:17.911262] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:11.437 [2024-07-15 16:53:17.911270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.437 [2024-07-15 16:53:17.911275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.437 [2024-07-15 16:53:17.911281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.437 [2024-07-15 16:53:17.911286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.437 [2024-07-15 16:53:17.911336] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:11.437 [2024-07-15 16:53:17.911347] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:11.437 [2024-07-15 16:53:17.912341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:11.437 [2024-07-15 16:53:17.912384] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:11.437 [2024-07-15 16:53:17.912390] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:11.437 [2024-07-15 16:53:17.913343] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:11.437 [2024-07-15 16:53:17.913354] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:11.437 [2024-07-15 16:53:17.913401] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:11.437 [2024-07-15 16:53:17.916231] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:11.437 are Threshold: 0% 00:11:11.437 
Life Percentage Used: 0% 00:11:11.437 Data Units Read: 0 00:11:11.437 Data Units Written: 0 00:11:11.437 Host Read Commands: 0 00:11:11.437 Host Write Commands: 0 00:11:11.437 Controller Busy Time: 0 minutes 00:11:11.437 Power Cycles: 0 00:11:11.437 Power On Hours: 0 hours 00:11:11.437 Unsafe Shutdowns: 0 00:11:11.437 Unrecoverable Media Errors: 0 00:11:11.437 Lifetime Error Log Entries: 0 00:11:11.437 Warning Temperature Time: 0 minutes 00:11:11.437 Critical Temperature Time: 0 minutes 00:11:11.437 00:11:11.437 Number of Queues 00:11:11.437 ================ 00:11:11.437 Number of I/O Submission Queues: 127 00:11:11.437 Number of I/O Completion Queues: 127 00:11:11.437 00:11:11.437 Active Namespaces 00:11:11.437 ================= 00:11:11.437 Namespace ID:1 00:11:11.437 Error Recovery Timeout: Unlimited 00:11:11.437 Command Set Identifier: NVM (00h) 00:11:11.437 Deallocate: Supported 00:11:11.437 Deallocated/Unwritten Error: Not Supported 00:11:11.438 Deallocated Read Value: Unknown 00:11:11.438 Deallocate in Write Zeroes: Not Supported 00:11:11.438 Deallocated Guard Field: 0xFFFF 00:11:11.438 Flush: Supported 00:11:11.438 Reservation: Supported 00:11:11.438 Namespace Sharing Capabilities: Multiple Controllers 00:11:11.438 Size (in LBAs): 131072 (0GiB) 00:11:11.438 Capacity (in LBAs): 131072 (0GiB) 00:11:11.438 Utilization (in LBAs): 131072 (0GiB) 00:11:11.438 NGUID: FB352088C2B649B0A0F59A2106B1D60B 00:11:11.438 UUID: fb352088-c2b6-49b0-a0f5-9a2106b1d60b 00:11:11.438 Thin Provisioning: Not Supported 00:11:11.438 Per-NS Atomic Units: Yes 00:11:11.438 Atomic Boundary Size (Normal): 0 00:11:11.438 Atomic Boundary Size (PFail): 0 00:11:11.438 Atomic Boundary Offset: 0 00:11:11.438 Maximum Single Source Range Length: 65535 00:11:11.438 Maximum Copy Length: 65535 00:11:11.438 Maximum Source Range Count: 1 00:11:11.438 NGUID/EUI64 Never Reused: No 00:11:11.438 Namespace Write Protected: No 00:11:11.438 Number of LBA Formats: 1 00:11:11.438 Current LBA Format: LBA Format 
#00 00:11:11.438 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:11.438 00:11:11.438 16:53:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:11.438 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.695 [2024-07-15 16:53:18.129537] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:16.955 Initializing NVMe Controllers 00:11:16.955 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:16.955 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:16.955 Initialization complete. Launching workers. 00:11:16.955 ======================================================== 00:11:16.955 Latency(us) 00:11:16.955 Device Information : IOPS MiB/s Average min max 00:11:16.955 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39896.67 155.85 3208.10 956.59 7424.99 00:11:16.955 ======================================================== 00:11:16.955 Total : 39896.67 155.85 3208.10 956.59 7424.99 00:11:16.955 00:11:16.955 [2024-07-15 16:53:23.239486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:16.955 16:53:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:16.955 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.955 [2024-07-15 16:53:23.458153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:22.215 
Initializing NVMe Controllers 00:11:22.215 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:22.215 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:22.215 Initialization complete. Launching workers. 00:11:22.215 ======================================================== 00:11:22.215 Latency(us) 00:11:22.215 Device Information : IOPS MiB/s Average min max 00:11:22.215 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39978.99 156.17 3201.87 990.11 6649.45 00:11:22.215 ======================================================== 00:11:22.215 Total : 39978.99 156.17 3201.87 990.11 6649.45 00:11:22.215 00:11:22.215 [2024-07-15 16:53:28.479583] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:22.215 16:53:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:22.215 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.215 [2024-07-15 16:53:28.679003] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:27.478 [2024-07-15 16:53:33.819317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:27.478 Initializing NVMe Controllers 00:11:27.478 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:27.478 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:27.478 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:27.478 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:11:27.478 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:27.478 Initialization complete. Launching workers. 00:11:27.478 Starting thread on core 2 00:11:27.478 Starting thread on core 3 00:11:27.478 Starting thread on core 1 00:11:27.478 16:53:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:27.478 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.478 [2024-07-15 16:53:34.103691] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:30.759 [2024-07-15 16:53:37.192629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:30.759 Initializing NVMe Controllers 00:11:30.759 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:30.759 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:30.759 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:30.759 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:30.759 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:30.759 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:30.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:30.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:30.759 Initialization complete. Launching workers. 
00:11:30.759 Starting thread on core 1 with urgent priority queue 00:11:30.759 Starting thread on core 2 with urgent priority queue 00:11:30.759 Starting thread on core 3 with urgent priority queue 00:11:30.759 Starting thread on core 0 with urgent priority queue 00:11:30.759 SPDK bdev Controller (SPDK2 ) core 0: 8700.00 IO/s 11.49 secs/100000 ios 00:11:30.759 SPDK bdev Controller (SPDK2 ) core 1: 7641.33 IO/s 13.09 secs/100000 ios 00:11:30.759 SPDK bdev Controller (SPDK2 ) core 2: 7617.33 IO/s 13.13 secs/100000 ios 00:11:30.759 SPDK bdev Controller (SPDK2 ) core 3: 9381.67 IO/s 10.66 secs/100000 ios 00:11:30.759 ======================================================== 00:11:30.759 00:11:30.759 16:53:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:30.759 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.066 [2024-07-15 16:53:37.462736] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:31.066 Initializing NVMe Controllers 00:11:31.066 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:31.066 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:31.066 Namespace ID: 1 size: 0GB 00:11:31.066 Initialization complete. 00:11:31.066 INFO: using host memory buffer for IO 00:11:31.066 Hello world! 
00:11:31.066 [2024-07-15 16:53:37.474818] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:31.066 16:53:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:31.066 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.324 [2024-07-15 16:53:37.744085] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:32.259 Initializing NVMe Controllers 00:11:32.259 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:32.259 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:32.259 Initialization complete. Launching workers. 00:11:32.259 submit (in ns) avg, min, max = 8381.9, 3231.3, 4000233.0 00:11:32.259 complete (in ns) avg, min, max = 20428.9, 1775.7, 5991990.4 00:11:32.259 00:11:32.259 Submit histogram 00:11:32.259 ================ 00:11:32.259 Range in us Cumulative Count 00:11:32.259 3.228 - 3.242: 0.0247% ( 4) 00:11:32.259 3.242 - 3.256: 0.0557% ( 5) 00:11:32.259 3.256 - 3.270: 0.0804% ( 4) 00:11:32.259 3.270 - 3.283: 0.1113% ( 5) 00:11:32.259 3.283 - 3.297: 0.6556% ( 88) 00:11:32.259 3.297 - 3.311: 2.5235% ( 302) 00:11:32.259 3.311 - 3.325: 6.0119% ( 564) 00:11:32.259 3.325 - 3.339: 9.8899% ( 627) 00:11:32.259 3.339 - 3.353: 14.3741% ( 725) 00:11:32.259 3.353 - 3.367: 20.0520% ( 918) 00:11:32.259 3.367 - 3.381: 25.2474% ( 840) 00:11:32.259 3.381 - 3.395: 30.9129% ( 916) 00:11:32.259 3.395 - 3.409: 37.3268% ( 1037) 00:11:32.259 3.409 - 3.423: 41.7986% ( 723) 00:11:32.259 3.423 - 3.437: 46.0725% ( 691) 00:11:32.259 3.437 - 3.450: 51.8370% ( 932) 00:11:32.259 3.450 - 3.464: 57.4406% ( 906) 00:11:32.259 3.464 - 3.478: 62.0609% ( 747) 00:11:32.259 3.478 - 3.492: 66.1183% ( 656) 00:11:32.259 3.492 - 3.506: 71.5673% ( 881) 
00:11:32.259 3.506 - 3.520: 76.2123% ( 751) 00:11:32.259 3.520 - 3.534: 79.5893% ( 546) 00:11:32.259 3.534 - 3.548: 82.3231% ( 442) 00:11:32.259 3.548 - 3.562: 84.5126% ( 354) 00:11:32.259 3.562 - 3.590: 87.0238% ( 406) 00:11:32.259 3.590 - 3.617: 88.3906% ( 221) 00:11:32.259 3.617 - 3.645: 89.7328% ( 217) 00:11:32.259 3.645 - 3.673: 91.3038% ( 254) 00:11:32.259 3.673 - 3.701: 93.0418% ( 281) 00:11:32.259 3.701 - 3.729: 94.8911% ( 299) 00:11:32.259 3.729 - 3.757: 96.4250% ( 248) 00:11:32.259 3.757 - 3.784: 97.6497% ( 198) 00:11:32.259 3.784 - 3.812: 98.4908% ( 136) 00:11:32.259 3.812 - 3.840: 98.9547% ( 75) 00:11:32.259 3.840 - 3.868: 99.3382% ( 62) 00:11:32.259 3.868 - 3.896: 99.4619% ( 20) 00:11:32.259 3.896 - 3.923: 99.5485% ( 14) 00:11:32.259 3.923 - 3.951: 99.5609% ( 2) 00:11:32.259 3.951 - 3.979: 99.5794% ( 3) 00:11:32.259 3.979 - 4.007: 99.5856% ( 1) 00:11:32.259 4.118 - 4.146: 99.5918% ( 1) 00:11:32.259 5.203 - 5.231: 99.5980% ( 1) 00:11:32.259 5.370 - 5.398: 99.6042% ( 1) 00:11:32.259 5.398 - 5.426: 99.6103% ( 1) 00:11:32.259 5.426 - 5.454: 99.6165% ( 1) 00:11:32.259 5.454 - 5.482: 99.6227% ( 1) 00:11:32.259 5.482 - 5.510: 99.6289% ( 1) 00:11:32.259 5.510 - 5.537: 99.6351% ( 1) 00:11:32.259 5.565 - 5.593: 99.6475% ( 2) 00:11:32.259 5.593 - 5.621: 99.6598% ( 2) 00:11:32.259 5.760 - 5.788: 99.6722% ( 2) 00:11:32.259 5.983 - 6.010: 99.6784% ( 1) 00:11:32.259 6.233 - 6.261: 99.6846% ( 1) 00:11:32.259 6.511 - 6.539: 99.6907% ( 1) 00:11:32.259 6.539 - 6.567: 99.6969% ( 1) 00:11:32.259 6.595 - 6.623: 99.7093% ( 2) 00:11:32.259 6.873 - 6.901: 99.7155% ( 1) 00:11:32.259 6.957 - 6.984: 99.7217% ( 1) 00:11:32.259 6.984 - 7.012: 99.7340% ( 2) 00:11:32.259 7.068 - 7.096: 99.7464% ( 2) 00:11:32.259 7.123 - 7.179: 99.7588% ( 2) 00:11:32.259 7.235 - 7.290: 99.7650% ( 1) 00:11:32.259 7.402 - 7.457: 99.7712% ( 1) 00:11:32.259 7.457 - 7.513: 99.7897% ( 3) 00:11:32.259 7.513 - 7.569: 99.7959% ( 1) 00:11:32.259 7.736 - 7.791: 99.8021% ( 1) 00:11:32.259 7.903 - 7.958: 99.8083% 
( 1) 00:11:32.259 8.070 - 8.125: 99.8268% ( 3) 00:11:32.259 8.348 - 8.403: 99.8330% ( 1) 00:11:32.259 8.403 - 8.459: 99.8392% ( 1) 00:11:32.259 8.515 - 8.570: 99.8454% ( 1) 00:11:32.259 8.793 - 8.849: 99.8516% ( 1) 00:11:32.259 10.463 - 10.518: 99.8577% ( 1) 00:11:32.259 11.854 - 11.910: 99.8639% ( 1) 00:11:32.259 13.746 - 13.802: 99.8701% ( 1) 00:11:32.259 19.256 - 19.367: 99.8763% ( 1) 00:11:32.259 3419.270 - 3433.517: 99.8825% ( 1) 00:11:32.259 3989.148 - 4017.642: 100.0000% ( 19) 00:11:32.259 00:11:32.259 Complete histogram 00:11:32.259 ================== 00:11:32.259 Ra[2024-07-15 16:53:38.838292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:32.259 nge in us Cumulative Count 00:11:32.259 1.774 - 1.781: 0.0124% ( 2) 00:11:32.259 1.781 - 1.795: 0.0371% ( 4) 00:11:32.259 1.795 - 1.809: 0.0742% ( 6) 00:11:32.259 1.809 - 1.823: 1.4164% ( 217) 00:11:32.259 1.823 - 1.837: 4.5708% ( 510) 00:11:32.259 1.837 - 1.850: 6.3830% ( 293) 00:11:32.259 1.850 - 1.864: 11.0589% ( 756) 00:11:32.259 1.864 - 1.878: 56.5129% ( 7349) 00:11:32.259 1.878 - 1.892: 89.8070% ( 5383) 00:11:32.259 1.892 - 1.906: 94.0314% ( 683) 00:11:32.259 1.906 - 1.920: 95.4231% ( 225) 00:11:32.259 1.920 - 1.934: 96.0416% ( 100) 00:11:32.259 1.934 - 1.948: 97.4208% ( 223) 00:11:32.259 1.948 - 1.962: 98.6331% ( 196) 00:11:32.259 1.962 - 1.976: 99.1155% ( 78) 00:11:32.259 1.976 - 1.990: 99.2269% ( 18) 00:11:32.259 1.990 - 2.003: 99.2516% ( 4) 00:11:32.259 2.003 - 2.017: 99.2763% ( 4) 00:11:32.259 2.031 - 2.045: 99.2825% ( 1) 00:11:32.259 2.045 - 2.059: 99.2949% ( 2) 00:11:32.259 2.059 - 2.073: 99.3073% ( 2) 00:11:32.259 2.073 - 2.087: 99.3135% ( 1) 00:11:32.259 2.101 - 2.115: 99.3196% ( 1) 00:11:32.259 2.212 - 2.226: 99.3258% ( 1) 00:11:32.259 2.282 - 2.296: 99.3320% ( 1) 00:11:32.259 2.310 - 2.323: 99.3382% ( 1) 00:11:32.259 2.407 - 2.421: 99.3444% ( 1) 00:11:32.259 3.590 - 3.617: 99.3506% ( 1) 00:11:32.259 3.868 - 3.896: 99.3691% ( 3) 
00:11:32.259 3.923 - 3.951: 99.3753% ( 1) 00:11:32.259 4.090 - 4.118: 99.3815% ( 1) 00:11:32.259 4.842 - 4.870: 99.3877% ( 1) 00:11:32.259 4.925 - 4.953: 99.3939% ( 1) 00:11:32.259 5.092 - 5.120: 99.4000% ( 1) 00:11:32.259 5.203 - 5.231: 99.4062% ( 1) 00:11:32.259 5.231 - 5.259: 99.4124% ( 1) 00:11:32.259 5.259 - 5.287: 99.4186% ( 1) 00:11:32.259 5.370 - 5.398: 99.4248% ( 1) 00:11:32.259 5.398 - 5.426: 99.4310% ( 1) 00:11:32.259 5.537 - 5.565: 99.4433% ( 2) 00:11:32.259 5.927 - 5.955: 99.4495% ( 1) 00:11:32.259 6.038 - 6.066: 99.4557% ( 1) 00:11:32.259 6.233 - 6.261: 99.4619% ( 1) 00:11:32.259 6.372 - 6.400: 99.4681% ( 1) 00:11:32.259 6.428 - 6.456: 99.4743% ( 1) 00:11:32.259 6.817 - 6.845: 99.4805% ( 1) 00:11:32.259 7.040 - 7.068: 99.4866% ( 1) 00:11:32.260 7.290 - 7.346: 99.4928% ( 1) 00:11:32.260 7.457 - 7.513: 99.5052% ( 2) 00:11:32.260 7.569 - 7.624: 99.5114% ( 1) 00:11:32.260 10.129 - 10.184: 99.5176% ( 1) 00:11:32.260 12.299 - 12.355: 99.5238% ( 1) 00:11:32.260 15.249 - 15.360: 99.5299% ( 1) 00:11:32.260 158.497 - 159.388: 99.5361% ( 1) 00:11:32.260 2023.068 - 2037.315: 99.5423% ( 1) 00:11:32.260 3989.148 - 4017.642: 99.9938% ( 73) 00:11:32.260 5983.722 - 6012.216: 100.0000% ( 1) 00:11:32.260 00:11:32.260 16:53:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:32.260 16:53:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:32.260 16:53:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:32.260 16:53:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:32.260 16:53:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:32.517 [ 00:11:32.517 { 00:11:32.518 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:32.518 "subtype": "Discovery", 00:11:32.518 "listen_addresses": [], 00:11:32.518 "allow_any_host": true, 00:11:32.518 "hosts": [] 00:11:32.518 }, 00:11:32.518 { 00:11:32.518 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:32.518 "subtype": "NVMe", 00:11:32.518 "listen_addresses": [ 00:11:32.518 { 00:11:32.518 "trtype": "VFIOUSER", 00:11:32.518 "adrfam": "IPv4", 00:11:32.518 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:32.518 "trsvcid": "0" 00:11:32.518 } 00:11:32.518 ], 00:11:32.518 "allow_any_host": true, 00:11:32.518 "hosts": [], 00:11:32.518 "serial_number": "SPDK1", 00:11:32.518 "model_number": "SPDK bdev Controller", 00:11:32.518 "max_namespaces": 32, 00:11:32.518 "min_cntlid": 1, 00:11:32.518 "max_cntlid": 65519, 00:11:32.518 "namespaces": [ 00:11:32.518 { 00:11:32.518 "nsid": 1, 00:11:32.518 "bdev_name": "Malloc1", 00:11:32.518 "name": "Malloc1", 00:11:32.518 "nguid": "3A17FD44A6CB4C0E888CAAA300E4965F", 00:11:32.518 "uuid": "3a17fd44-a6cb-4c0e-888c-aaa300e4965f" 00:11:32.518 }, 00:11:32.518 { 00:11:32.518 "nsid": 2, 00:11:32.518 "bdev_name": "Malloc3", 00:11:32.518 "name": "Malloc3", 00:11:32.518 "nguid": "917827A06BFC441F82E86BBB2347CF6D", 00:11:32.518 "uuid": "917827a0-6bfc-441f-82e8-6bbb2347cf6d" 00:11:32.518 } 00:11:32.518 ] 00:11:32.518 }, 00:11:32.518 { 00:11:32.518 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:32.518 "subtype": "NVMe", 00:11:32.518 "listen_addresses": [ 00:11:32.518 { 00:11:32.518 "trtype": "VFIOUSER", 00:11:32.518 "adrfam": "IPv4", 00:11:32.518 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:32.518 "trsvcid": "0" 00:11:32.518 } 00:11:32.518 ], 00:11:32.518 "allow_any_host": true, 00:11:32.518 "hosts": [], 00:11:32.518 "serial_number": "SPDK2", 00:11:32.518 "model_number": "SPDK bdev Controller", 00:11:32.518 "max_namespaces": 32, 00:11:32.518 "min_cntlid": 1, 00:11:32.518 "max_cntlid": 65519, 00:11:32.518 "namespaces": [ 00:11:32.518 { 00:11:32.518 "nsid": 1, 00:11:32.518 
"bdev_name": "Malloc2", 00:11:32.518 "name": "Malloc2", 00:11:32.518 "nguid": "FB352088C2B649B0A0F59A2106B1D60B", 00:11:32.518 "uuid": "fb352088-c2b6-49b0-a0f5-9a2106b1d60b" 00:11:32.518 } 00:11:32.518 ] 00:11:32.518 } 00:11:32.518 ] 00:11:32.518 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:32.518 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:32.518 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=8804 00:11:32.518 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:32.518 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:32.518 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:32.518 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:32.518 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:32.518 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:32.518 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:32.518 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.776 [2024-07-15 16:53:39.199032] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:32.776 Malloc4 00:11:32.776 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:32.776 [2024-07-15 16:53:39.432796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:33.034 Asynchronous Event Request test 00:11:33.034 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:33.034 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:33.034 Registering asynchronous event callbacks... 00:11:33.034 Starting namespace attribute notice tests for all controllers... 00:11:33.034 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:33.034 aer_cb - Changed Namespace 00:11:33.034 Cleaning up... 
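The AER test above blocks on a touch file that the aer tool creates once its event callback fires, via the `waitforfile` helper whose xtrace appears in the log. The real helper lives in SPDK's `autotest_common.sh`; the sketch below is only consistent with the traced loop (`local i=0`, repeated `[ ! -e file ]` checks), and the 200-iteration cap is an assumption, not taken from the log:

```shell
# Hedged sketch of waitforfile: poll until the given file exists.
# The ~20s give-up limit (200 x 0.1s) is an assumption for illustration.
waitforfile() {
  local i=0
  while [ ! -e "$1" ]; do
    sleep 0.1
    i=$((i + 1))
    [ "$i" -lt 200 ] || return 1   # file never appeared
  done
  return 0
}
```

Once the file appears the script removes it (`rm -f /tmp/aer_touch_file`) and continues with the Malloc4 hot-add that triggers the namespace-change AER.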
00:11:33.034 [ 00:11:33.034 { 00:11:33.034 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:33.034 "subtype": "Discovery", 00:11:33.034 "listen_addresses": [], 00:11:33.034 "allow_any_host": true, 00:11:33.034 "hosts": [] 00:11:33.034 }, 00:11:33.034 { 00:11:33.034 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:33.034 "subtype": "NVMe", 00:11:33.034 "listen_addresses": [ 00:11:33.034 { 00:11:33.034 "trtype": "VFIOUSER", 00:11:33.034 "adrfam": "IPv4", 00:11:33.034 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:33.034 "trsvcid": "0" 00:11:33.034 } 00:11:33.034 ], 00:11:33.034 "allow_any_host": true, 00:11:33.034 "hosts": [], 00:11:33.034 "serial_number": "SPDK1", 00:11:33.034 "model_number": "SPDK bdev Controller", 00:11:33.034 "max_namespaces": 32, 00:11:33.034 "min_cntlid": 1, 00:11:33.034 "max_cntlid": 65519, 00:11:33.034 "namespaces": [ 00:11:33.034 { 00:11:33.034 "nsid": 1, 00:11:33.034 "bdev_name": "Malloc1", 00:11:33.034 "name": "Malloc1", 00:11:33.034 "nguid": "3A17FD44A6CB4C0E888CAAA300E4965F", 00:11:33.034 "uuid": "3a17fd44-a6cb-4c0e-888c-aaa300e4965f" 00:11:33.034 }, 00:11:33.034 { 00:11:33.034 "nsid": 2, 00:11:33.034 "bdev_name": "Malloc3", 00:11:33.034 "name": "Malloc3", 00:11:33.034 "nguid": "917827A06BFC441F82E86BBB2347CF6D", 00:11:33.034 "uuid": "917827a0-6bfc-441f-82e8-6bbb2347cf6d" 00:11:33.034 } 00:11:33.034 ] 00:11:33.034 }, 00:11:33.034 { 00:11:33.034 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:33.034 "subtype": "NVMe", 00:11:33.034 "listen_addresses": [ 00:11:33.034 { 00:11:33.034 "trtype": "VFIOUSER", 00:11:33.034 "adrfam": "IPv4", 00:11:33.034 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:33.034 "trsvcid": "0" 00:11:33.034 } 00:11:33.034 ], 00:11:33.034 "allow_any_host": true, 00:11:33.034 "hosts": [], 00:11:33.034 "serial_number": "SPDK2", 00:11:33.034 "model_number": "SPDK bdev Controller", 00:11:33.034 "max_namespaces": 32, 00:11:33.034 "min_cntlid": 1, 00:11:33.034 "max_cntlid": 65519, 00:11:33.034 "namespaces": [ 
00:11:33.034 { 00:11:33.034 "nsid": 1, 00:11:33.034 "bdev_name": "Malloc2", 00:11:33.034 "name": "Malloc2", 00:11:33.034 "nguid": "FB352088C2B649B0A0F59A2106B1D60B", 00:11:33.034 "uuid": "fb352088-c2b6-49b0-a0f5-9a2106b1d60b" 00:11:33.034 }, 00:11:33.034 { 00:11:33.034 "nsid": 2, 00:11:33.034 "bdev_name": "Malloc4", 00:11:33.034 "name": "Malloc4", 00:11:33.034 "nguid": "F3000CDD07D04E90B9C76CC71181A3A7", 00:11:33.034 "uuid": "f3000cdd-07d0-4e90-b9c7-6cc71181a3a7" 00:11:33.034 } 00:11:33.034 ] 00:11:33.034 } 00:11:33.034 ] 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 8804 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 884 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 884 ']' 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 884 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 884 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 884' 00:11:33.034 killing process with pid 884 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 884 00:11:33.034 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 884 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 
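The teardown above goes through a `killprocess` helper; its xtrace shows a liveness check (`kill -0`), a `ps --no-headers -o comm=` lookup, a guard against signalling a `sudo` wrapper, then `kill` and `wait`. A hedged reconstruction of just those visible steps (the actual implementation is SPDK's `autotest_common.sh` and includes escalation logic omitted here):

```shell
# Hedged sketch of killprocess as traced in the log: verify the pid is
# alive, skip sudo wrappers, then signal and reap the process.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 1          # still alive?
  local name
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = sudo ] && return 0                  # never signal the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                 # reap; ignore signal exit status
}
```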
00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=9036 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 9036' 00:11:33.293 Process pid: 9036 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 9036 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 9036 ']' 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:33.293 16:53:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:33.551 [2024-07-15 16:53:39.999073] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:33.551 [2024-07-15 16:53:39.999958] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:11:33.551 [2024-07-15 16:53:39.999996] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.551 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.551 [2024-07-15 16:53:40.054884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.551 [2024-07-15 16:53:40.135040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.551 [2024-07-15 16:53:40.135081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.551 [2024-07-15 16:53:40.135088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.551 [2024-07-15 16:53:40.135094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.551 [2024-07-15 16:53:40.135099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:33.551 [2024-07-15 16:53:40.135146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.551 [2024-07-15 16:53:40.135253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.551 [2024-07-15 16:53:40.135324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.551 [2024-07-15 16:53:40.135326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.551 [2024-07-15 16:53:40.213894] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:33.551 [2024-07-15 16:53:40.214024] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:33.551 [2024-07-15 16:53:40.214274] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:33.551 [2024-07-15 16:53:40.214612] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:33.551 [2024-07-15 16:53:40.214859] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
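The interrupt-mode run then repeats the per-device `setup_nvmf_vfio_user` sequence traced throughout this log: make the vfio-user socket directory, create a 64 MB malloc bdev with 512-byte blocks, create a subsystem, attach the bdev as a namespace, and add a VFIOUSER listener. Wrapped as a function it might look like the sketch below; `SPDK_DIR` is an assumed variable (the real script uses its own rootdir), and the function is a sketch, not the script's actual implementation:

```shell
# Hedged sketch of the per-device setup loop seen in the trace.
# SPDK_DIR is an assumption: the path to an SPDK checkout with scripts/rpc.py.
setup_vfio_user_device() {
  local i=$1 rpc="$SPDK_DIR/scripts/rpc.py"
  mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
  "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
  "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
  "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
  "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
    -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
}
```

The transport itself is created once beforehand (`nvmf_create_transport -t VFIOUSER`, here with the interrupt-mode `-M -I` arguments), after which the loop runs for each device index.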
00:11:34.485 16:53:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.485 16:53:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:34.485 16:53:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:35.417 16:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:35.417 16:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:35.417 16:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:35.417 16:53:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:35.417 16:53:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:35.417 16:53:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:35.675 Malloc1 00:11:35.675 16:53:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:35.933 16:53:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:35.933 16:53:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:36.190 16:53:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:36.190 16:53:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:11:36.190 16:53:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:36.447 Malloc2 00:11:36.447 16:53:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:36.447 16:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:36.704 16:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 9036 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 9036 ']' 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 9036 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 9036 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 9036' 00:11:36.962 killing process with 
pid 9036 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 9036 00:11:36.962 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 9036 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:37.220 00:11:37.220 real 0m51.331s 00:11:37.220 user 3m23.206s 00:11:37.220 sys 0m3.677s 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:37.220 ************************************ 00:11:37.220 END TEST nvmf_vfio_user 00:11:37.220 ************************************ 00:11:37.220 16:53:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:37.220 16:53:43 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:37.220 16:53:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:37.220 16:53:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.220 16:53:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:37.220 ************************************ 00:11:37.220 START TEST nvmf_vfio_user_nvme_compliance 00:11:37.220 ************************************ 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:37.220 * Looking for test storage... 
00:11:37.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.220 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.478 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:37.478 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.479 16:53:43 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:37.479 16:53:43 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=9810 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 9810' 00:11:37.479 Process pid: 9810 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 9810 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 9810 ']' 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.479 16:53:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:37.479 [2024-07-15 16:53:43.945235] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:11:37.479 [2024-07-15 16:53:43.945285] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.479 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.479 [2024-07-15 16:53:43.993968] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:37.479 [2024-07-15 16:53:44.067337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.479 [2024-07-15 16:53:44.067375] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:37.479 [2024-07-15 16:53:44.067382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.479 [2024-07-15 16:53:44.067388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.479 [2024-07-15 16:53:44.067393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.479 [2024-07-15 16:53:44.067489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.479 [2024-07-15 16:53:44.067585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.479 [2024-07-15 16:53:44.067586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.412 16:53:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.412 16:53:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:11:38.412 16:53:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:39.345 malloc0 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.345 16:53:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:39.345 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.345 00:11:39.345 00:11:39.345 CUnit - A unit testing framework for C - Version 2.1-3 00:11:39.345 http://cunit.sourceforge.net/ 00:11:39.345 00:11:39.345 00:11:39.345 Suite: nvme_compliance 00:11:39.345 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 16:53:45.982029] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:39.345 [2024-07-15 16:53:45.983360] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:39.345 [2024-07-15 16:53:45.983376] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:39.345 [2024-07-15 16:53:45.983381] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:39.345 [2024-07-15 16:53:45.987056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:39.603 passed 00:11:39.603 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 16:53:46.064613] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:39.603 [2024-07-15 16:53:46.067634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:39.603 passed 00:11:39.603 Test: admin_identify_ns ...[2024-07-15 16:53:46.147569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:39.603 [2024-07-15 16:53:46.207232] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:39.603 [2024-07-15 16:53:46.215232] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:39.603 [2024-07-15 16:53:46.236319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:11:39.603 passed 00:11:39.860 Test: admin_get_features_mandatory_features ...[2024-07-15 16:53:46.313633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:39.860 [2024-07-15 16:53:46.316655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:39.860 passed 00:11:39.860 Test: admin_get_features_optional_features ...[2024-07-15 16:53:46.397219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:39.860 [2024-07-15 16:53:46.400242] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:39.860 passed 00:11:39.860 Test: admin_set_features_number_of_queues ...[2024-07-15 16:53:46.478177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:40.118 [2024-07-15 16:53:46.583341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:40.118 passed 00:11:40.118 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 16:53:46.658601] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:40.118 [2024-07-15 16:53:46.661620] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:40.118 passed 00:11:40.118 Test: admin_get_log_page_with_lpo ...[2024-07-15 16:53:46.739559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:40.376 [2024-07-15 16:53:46.808237] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:40.376 [2024-07-15 16:53:46.821295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:40.376 passed 00:11:40.376 Test: fabric_property_get ...[2024-07-15 16:53:46.896477] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:40.376 [2024-07-15 16:53:46.897707] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:11:40.376 [2024-07-15 16:53:46.901523] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:40.376 passed 00:11:40.376 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 16:53:46.980037] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:40.376 [2024-07-15 16:53:46.981273] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:40.376 [2024-07-15 16:53:46.983060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:40.376 passed 00:11:40.634 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 16:53:47.061777] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:40.634 [2024-07-15 16:53:47.146230] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:40.634 [2024-07-15 16:53:47.162233] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:40.634 [2024-07-15 16:53:47.167320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:40.634 passed 00:11:40.634 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 16:53:47.244486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:40.634 [2024-07-15 16:53:47.245720] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:40.634 [2024-07-15 16:53:47.247518] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:40.634 passed 00:11:40.890 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 16:53:47.324815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:40.890 [2024-07-15 16:53:47.401231] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:40.890 [2024-07-15 16:53:47.425228] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:40.890 [2024-07-15 16:53:47.430315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:40.890 passed 00:11:40.890 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 16:53:47.504438] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:40.890 [2024-07-15 16:53:47.505670] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:40.890 [2024-07-15 16:53:47.505693] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:40.890 [2024-07-15 16:53:47.508465] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:40.890 passed 00:11:41.147 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 16:53:47.586427] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:41.147 [2024-07-15 16:53:47.677230] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:11:41.147 [2024-07-15 16:53:47.685231] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:41.147 [2024-07-15 16:53:47.693235] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:41.147 [2024-07-15 16:53:47.701237] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:41.147 [2024-07-15 16:53:47.730307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:41.147 passed 00:11:41.147 Test: admin_create_io_sq_verify_pc ...[2024-07-15 16:53:47.806505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:41.405 [2024-07-15 16:53:47.823242] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:41.405 [2024-07-15 16:53:47.840692] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:11:41.405 passed 00:11:41.405 Test: admin_create_io_qp_max_qps ...[2024-07-15 16:53:47.918237] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:42.777 [2024-07-15 16:53:49.020233] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:42.777 [2024-07-15 16:53:49.405464] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:42.777 passed 00:11:43.035 Test: admin_create_io_sq_shared_cq ...[2024-07-15 16:53:49.482638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:43.035 [2024-07-15 16:53:49.615235] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:43.035 [2024-07-15 16:53:49.652294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:43.035 passed 00:11:43.035 00:11:43.035 Run Summary: Type Total Ran Passed Failed Inactive 00:11:43.035 suites 1 1 n/a 0 0 00:11:43.035 tests 18 18 18 0 0 00:11:43.035 asserts 360 360 360 0 n/a 00:11:43.035 00:11:43.035 Elapsed time = 1.513 seconds 00:11:43.035 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 9810 00:11:43.035 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 9810 ']' 00:11:43.035 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 9810 00:11:43.035 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:11:43.035 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.035 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 9810 00:11:43.293 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- 
# process_name=reactor_0 00:11:43.293 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:43.293 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 9810' 00:11:43.293 killing process with pid 9810 00:11:43.293 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 9810 00:11:43.293 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 9810 00:11:43.293 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:43.293 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:43.293 00:11:43.293 real 0m6.147s 00:11:43.293 user 0m17.640s 00:11:43.293 sys 0m0.423s 00:11:43.293 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.293 16:53:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:43.293 ************************************ 00:11:43.293 END TEST nvmf_vfio_user_nvme_compliance 00:11:43.294 ************************************ 00:11:43.552 16:53:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:43.552 16:53:49 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:43.552 16:53:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:43.552 16:53:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.552 16:53:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:43.552 ************************************ 00:11:43.552 START TEST nvmf_vfio_user_fuzz 00:11:43.552 ************************************ 00:11:43.552 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:43.552 * Looking for test storage... 00:11:43.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.552 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.552 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:43.552 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.552 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.553 16:53:50 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.553 
16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=10798 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 10798' 00:11:43.553 Process pid: 10798 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 10798 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 10798 ']' 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 
00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.553 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:44.499 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.499 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:11:44.499 16:53:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:45.433 malloc0 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.433 16:53:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:45.433 16:53:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.433 16:53:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:45.433 16:53:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.433 16:53:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:45.433 16:53:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.433 16:53:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:45.433 16:53:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:17.566 Fuzzing completed. 
Shutting down the fuzz application 00:12:17.566 00:12:17.566 Dumping successful admin opcodes: 00:12:17.566 8, 9, 10, 24, 00:12:17.566 Dumping successful io opcodes: 00:12:17.566 0, 00:12:17.566 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1001788, total successful commands: 3927, random_seed: 3538098496 00:12:17.566 NS: 0x200003a1ef00 admin qp, Total commands completed: 247019, total successful commands: 1995, random_seed: 2240503872 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 10798 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 10798 ']' 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 10798 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 10798 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 10798' 00:12:17.566 killing process with pid 10798 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 10798 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 10798 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:17.566 00:12:17.566 real 0m32.719s 00:12:17.566 user 0m31.395s 00:12:17.566 sys 0m30.171s 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:17.566 16:54:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:17.566 ************************************ 00:12:17.566 END TEST nvmf_vfio_user_fuzz 00:12:17.567 ************************************ 00:12:17.567 16:54:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:17.567 16:54:22 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:17.567 16:54:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:17.567 16:54:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.567 16:54:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:17.567 ************************************ 00:12:17.567 START TEST nvmf_host_management 00:12:17.567 ************************************ 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:17.567 * Looking for test storage... 
00:12:17.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.567 
16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:17.567 16:54:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:21.751 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.751 
16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:21.751 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:21.751 Found net devices under 0000:86:00.0: cvl_0_0 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:21.751 Found net devices under 0000:86:00.1: cvl_0_1 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:21.751 16:54:27 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:21.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:21.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:12:21.751 00:12:21.751 --- 10.0.0.2 ping statistics --- 00:12:21.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.751 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:12:21.751 00:12:21.751 --- 10.0.0.1 ping statistics --- 00:12:21.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.751 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:21.751 16:54:27 
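The `nvmf_tcp_init` steps traced above build a two-interface topology: the target port is moved into a private network namespace while the initiator port stays in the root namespace, and a ping in each direction verifies the link. A configuration sketch of that sequence, with the commands taken directly from the log (requires root; interface names `cvl_0_0`/`cvl_0_1` and addresses come from this run):

```shell
# Target side lives in its own netns; initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to the listener port, then sanity-check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Later in the log, `NVMF_APP` is prefixed with `ip netns exec cvl_0_0_ns_spdk` so the target process runs inside that namespace.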
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=19605 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 19605 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 19605 ']' 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:21.751 16:54:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:21.751 [2024-07-15 16:54:27.835497] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:12:21.751 [2024-07-15 16:54:27.835543] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.751 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.751 [2024-07-15 16:54:27.891981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.751 [2024-07-15 16:54:27.973614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.751 [2024-07-15 16:54:27.973650] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.751 [2024-07-15 16:54:27.973657] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.751 [2024-07-15 16:54:27.973663] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.751 [2024-07-15 16:54:27.973668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.752 [2024-07-15 16:54:27.973703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.752 [2024-07-15 16:54:27.973789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.752 [2024-07-15 16:54:27.973896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.752 [2024-07-15 16:54:27.973897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:22.008 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.008 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:22.008 16:54:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.008 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:22.008 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.008 16:54:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.008 16:54:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.008 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.266 [2024-07-15 16:54:28.683347] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.266 16:54:28 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.266 Malloc0 00:12:22.266 [2024-07-15 16:54:28.742969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=19873 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 19873 /var/tmp/bdevperf.sock 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 19873 ']' 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:22.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:22.266 { 00:12:22.266 "params": { 00:12:22.266 "name": "Nvme$subsystem", 00:12:22.266 "trtype": "$TEST_TRANSPORT", 00:12:22.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:22.266 "adrfam": "ipv4", 00:12:22.266 "trsvcid": "$NVMF_PORT", 00:12:22.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:22.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:22.266 "hdgst": ${hdgst:-false}, 00:12:22.266 "ddgst": ${ddgst:-false} 00:12:22.266 }, 00:12:22.266 "method": "bdev_nvme_attach_controller" 00:12:22.266 } 00:12:22.266 EOF 00:12:22.266 )") 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:22.266 16:54:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:22.266 "params": { 00:12:22.266 "name": "Nvme0", 00:12:22.266 "trtype": "tcp", 00:12:22.266 "traddr": "10.0.0.2", 00:12:22.266 "adrfam": "ipv4", 00:12:22.266 "trsvcid": "4420", 00:12:22.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:22.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:22.266 "hdgst": false, 00:12:22.266 "ddgst": false 00:12:22.266 }, 00:12:22.266 "method": "bdev_nvme_attach_controller" 00:12:22.266 }' 00:12:22.266 [2024-07-15 16:54:28.833005] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:12:22.266 [2024-07-15 16:54:28.833053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid19873 ] 00:12:22.266 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.266 [2024-07-15 16:54:28.887218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.522 [2024-07-15 16:54:28.960734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.522 Running I/O for 10 seconds... 
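The `gen_nvmf_target_json` output above is produced by filling a per-subsystem heredoc template with shell parameters and then merging the fragments with `jq`. A simplified sketch of the templating step, using the values this run resolved to (single subsystem, no digest options):

```shell
# One controller-attach fragment, as in the printf '%s\n' output above.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

bdevperf consumes this via `--json /dev/fd/63`, i.e. as a process-substitution file descriptor rather than a file on disk.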
00:12:23.088 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.089 
16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:23.089 [2024-07-15 16:54:29.726276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177d460 is same with the state(5) to be set 00:12:23.089 [2024-07-15 16:54:29.726318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177d460 is same with the state(5) to be set 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.089 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:23.089 [2024-07-15 16:54:29.737962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
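The `waitforio` calls traced above poll `bdev_get_iostat` over the bdevperf RPC socket, extracting `num_read_ops` with `jq` until it crosses a threshold (here, 899 >= 100) or ten retries elapse. A self-contained sketch of that loop, where the hypothetical `fake_iostat` stands in for `rpc_cmd ... bdev_get_iostat | jq -r '.bdevs[0].num_read_ops'`:

```shell
# Stand-in for the RPC + jq pipeline; pretends I/O accumulates per poll.
fake_iostat() { echo $(( $1 * 300 )); }

waitforio() {
    local ret=1 i
    for (( i = 10; i != 0; i-- )); do
        local read_io_count
        read_io_count=$(fake_iostat $(( 11 - i )))
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
waitforio && echo "I/O observed"   # prints "I/O observed"
```

Once the read count is confirmed, the real test proceeds to `nvmf_subsystem_remove_host`, exercising host management while I/O is in flight.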
(0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.089 [2024-07-15 16:54:29.737999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.089 [2024-07-15 16:54:29.738009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.089 [2024-07-15 16:54:29.738016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.089 [2024-07-15 16:54:29.738024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.089 [2024-07-15 16:54:29.738030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.089 [2024-07-15 16:54:29.738038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.089 [2024-07-15 16:54:29.738044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.089 [2024-07-15 16:54:29.738050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ce980 is same with the state(5) to be set 00:12:23.089 [2024-07-15 16:54:29.738129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.089 [2024-07-15 16:54:29.738138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.089 [2024-07-15 16:54:29.738152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.089 [2024-07-15 
16:54:29.738160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repetitive qpair teardown log elided: nvme_qpair.c prints the same WRITE command (sqid:1, nsid:1, len:128) for cid:2 through cid:63 (lba:256 through lba:8064), each followed by the identical ABORTED - SQ DELETION (00/08) completion ...]
00:12:23.090 16:54:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:23.090 [2024-07-15 16:54:29.739194] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcdfb20 was disconnected and freed. reset controller.
00:12:23.090 16:54:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:12:23.090 [2024-07-15 16:54:29.740097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:12:23.090 task offset: 0 on job bdev=Nvme0n1 fails
00:12:23.090
00:12:23.090                                                        Latency(us)
00:12:23.090 Device Information          : runtime(s)    IOPS   MiB/s  Fail/s   TO/s   Average      min      max
00:12:23.090 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:23.090 Job: Nvme0n1 ended in about 0.58 seconds with error
00:12:23.091 	 Verification LBA range: start 0x0 length 0x400
00:12:23.091 	 Nvme0n1                   :       0.58 1761.34  110.08  110.08   0.00  33494.98  1709.63 28493.91
00:12:23.091 ===================================================================================================================
00:12:23.091 Total                       :            1761.34  110.08  110.08   0.00  33494.98  1709.63 28493.91
00:12:23.091 [2024-07-15 16:54:29.741686] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:23.091 [2024-07-15 16:54:29.741701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ce980 (9): Bad file descriptor
00:12:23.348 [2024-07-15 16:54:29.789532] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 19873 00:12:24.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (19873) - No such process 00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:24.281 { 00:12:24.281 "params": { 00:12:24.281 "name": "Nvme$subsystem", 00:12:24.281 "trtype": "$TEST_TRANSPORT", 00:12:24.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:24.281 "adrfam": "ipv4", 00:12:24.281 "trsvcid": "$NVMF_PORT", 00:12:24.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:24.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:24.281 "hdgst": ${hdgst:-false}, 00:12:24.281 "ddgst": ${ddgst:-false} 00:12:24.281 }, 00:12:24.281 "method": "bdev_nvme_attach_controller" 00:12:24.281 } 00:12:24.281 EOF 00:12:24.281 )") 00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:24.281 16:54:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:24.281 "params": { 00:12:24.281 "name": "Nvme0", 00:12:24.281 "trtype": "tcp", 00:12:24.281 "traddr": "10.0.0.2", 00:12:24.281 "adrfam": "ipv4", 00:12:24.281 "trsvcid": "4420", 00:12:24.281 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:24.281 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:24.281 "hdgst": false, 00:12:24.281 "ddgst": false 00:12:24.281 }, 00:12:24.281 "method": "bdev_nvme_attach_controller" 00:12:24.281 }' 00:12:24.281 [2024-07-15 16:54:30.794396] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:12:24.281 [2024-07-15 16:54:30.794444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20139 ] 00:12:24.281 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.281 [2024-07-15 16:54:30.848257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.281 [2024-07-15 16:54:30.921996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.539 Running I/O for 1 seconds... 
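The `gen_nvmf_target_json` trace above shows how the harness assembles the bdevperf `--json` config: a heredoc template is expanded once per subsystem into a bash array, the fragments are joined with `IFS=,`, and the result is validated with `jq`. A minimal standalone sketch of that pattern follows; `gen_config`, the subsystem NQNs, and the target address here are illustrative stand-ins, not the harness's real helper or values.

```shell
# Sketch of the heredoc-template + array + comma-IFS join pattern,
# assuming hypothetical names: gen_config stands in for the harness's
# gen_nvmf_target_json, and the address/NQN values are placeholders.
gen_config() {
    local config=()
    local subsystem
    for subsystem in "${@:-0}"; do
        # Expand the template once per subsystem id and stash it.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the per-subsystem fragments with commas, mirroring the
    # IFS=, step in the trace, before the result would be piped to jq.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_config 0
```

Joining with `"${config[*]}"` under `IFS=,` is what lets the same template serve one or many subsystems without string surgery; the trace's `jq .` step then both pretty-prints and syntax-checks the concatenated JSON.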
00:12:25.473
00:12:25.473                                                        Latency(us)
00:12:25.473 Device Information          : runtime(s)    IOPS   MiB/s  Fail/s   TO/s   Average      min      max
00:12:25.473 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:25.473 	 Verification LBA range: start 0x0 length 0x400
00:12:25.473 	 Nvme0n1                   :       1.01 1829.29  114.33    0.00   0.00  34457.06  7522.39 29063.79
00:12:25.473 ===================================================================================================================
00:12:25.473 Total                       :            1829.29  114.33    0.00   0.00  34457.06  7522.39 29063.79
00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.730
16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:25.730 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 19605 ']' 00:12:25.731 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 19605 00:12:25.731 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 19605 ']' 00:12:25.731 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 19605 00:12:25.731 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:12:25.731 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:25.731 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 19605 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 19605' 00:12:25.989 killing process with pid 19605 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 19605 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 19605 00:12:25.989 [2024-07-15 16:54:32.582880] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.989 16:54:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.523 16:54:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:28.523 16:54:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:28.523 00:12:28.523 real 0m11.874s 00:12:28.523 user 0m22.194s 00:12:28.523 sys 0m4.857s 00:12:28.523 16:54:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:28.523 16:54:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:28.523 ************************************ 00:12:28.523 END TEST nvmf_host_management 00:12:28.523 ************************************ 00:12:28.523 16:54:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:28.523 16:54:34 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:28.523 16:54:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:28.523 16:54:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.523 16:54:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:28.523 ************************************ 00:12:28.523 START TEST nvmf_lvol 00:12:28.523 ************************************ 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:28.523 * Looking for test storage... 
00:12:28.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:28.523 16:54:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:33.786 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:33.786 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.786 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:33.787 Found net devices under 0000:86:00.0: cvl_0_0 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:33.787 Found net devices under 0000:86:00.1: cvl_0_1 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:33.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:12:33.787 00:12:33.787 --- 10.0.0.2 ping statistics --- 00:12:33.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.787 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:12:33.787 00:12:33.787 --- 10.0.0.1 ping statistics --- 00:12:33.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.787 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=23872 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 23872 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 23872 ']' 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- 
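The nvmftestinit sequence above (nvmf_tcp_init in nvmf/common.sh) moves one port of the dual-port NIC into a fresh network namespace and leaves its sibling in the root namespace, so target (10.0.0.2) and initiator (10.0.0.1) get distinct endpoints on a single host, verified by the two pings. A dry-run sketch of that plumbing — interface names (cvl_0_0 / cvl_0_1), namespace name, addresses, and the iptables rule are taken from the log; the `run` stub only prints each command, since applying them requires root:

```shell
#!/bin/sh
# Dry-run sketch of the namespace plumbing performed above by nvmf/common.sh.
# run() prints each command instead of executing it (the real steps need root).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side port, moved into the namespace
INI_IF=cvl_0_1   # initiator-side port, left in the root namespace
CMDS=""
run() { echo "$@"; CMDS="$CMDS$*
"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

With this topology in place, nvmf_tgt is later launched under `ip netns exec cvl_0_0_ns_spdk` so it listens on the namespaced side while the perf initiator connects from the root namespace.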
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:33.787 16:54:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:33.787 [2024-07-15 16:54:39.919372] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:12:33.787 [2024-07-15 16:54:39.919414] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.787 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.787 [2024-07-15 16:54:39.975318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.787 [2024-07-15 16:54:40.063179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.787 [2024-07-15 16:54:40.063214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.787 [2024-07-15 16:54:40.063221] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.787 [2024-07-15 16:54:40.063231] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.787 [2024-07-15 16:54:40.063237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:33.787 [2024-07-15 16:54:40.063281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.787 [2024-07-15 16:54:40.063375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.787 [2024-07-15 16:54:40.063377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.351 16:54:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.351 16:54:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:34.351 16:54:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:34.352 16:54:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:34.352 16:54:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:34.352 16:54:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.352 16:54:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:34.352 [2024-07-15 16:54:40.918388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.352 16:54:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:34.609 16:54:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:34.609 16:54:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:34.867 16:54:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:34.867 16:54:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:34.867 16:54:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:35.124 16:54:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=84a96477-bc7b-45a1-865c-98fd2e27326b 00:12:35.124 16:54:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 84a96477-bc7b-45a1-865c-98fd2e27326b lvol 20 00:12:35.381 16:54:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c554c730-0124-467e-9d5e-305698b5dd93 00:12:35.381 16:54:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:35.638 16:54:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c554c730-0124-467e-9d5e-305698b5dd93 00:12:35.638 16:54:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:35.896 [2024-07-15 16:54:42.425811] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.896 16:54:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:36.153 16:54:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=24364 00:12:36.153 16:54:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:36.153 16:54:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:36.153 EAL: No free 2048 kB hugepages reported on node 1 
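The provisioning steps above can be read as a short RPC sequence: two 64 MiB malloc bdevs are striped into a raid0, a logical volume store (`lvs`) is created on it, a size-20 lvol is carved out, and the lvol is exported through subsystem nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. A dry-run sketch of that sequence — the RPC method names and arguments come from the log, but `rpc_py` is stubbed to echo (the real calls need scripts/rpc.py and a running nvmf_tgt), and the `*_UUID` placeholders stand in for identifiers the create calls return (84a96477-… and c554c730-… in this run):

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_lvol provisioning sequence recorded above.
# rpc_py is stubbed to echo; the UUID placeholders are hypothetical stand-ins
# for the values that bdev_lvol_create_lvstore / bdev_lvol_create print back.
LOG=""
rpc_py() { echo "rpc.py $*"; LOG="$LOG rpc.py $*;"; }
LVS_UUID="<uuid-from-bdev_lvol_create_lvstore>"
LVOL_UUID="<uuid-from-bdev_lvol_create>"
NQN=nqn.2016-06.io.spdk:cnode0

rpc_py nvmf_create_transport -t tcp -o -u 8192
rpc_py bdev_malloc_create 64 512                      # -> Malloc0
rpc_py bdev_malloc_create 64 512                      # -> Malloc1
rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
rpc_py bdev_lvol_create_lvstore raid0 lvs             # -> lvstore UUID
rpc_py bdev_lvol_create -u "$LVS_UUID" lvol 20        # -> lvol UUID
rpc_py nvmf_create_subsystem "$NQN" -a -s SPDK0
rpc_py nvmf_subsystem_add_ns "$NQN" "$LVOL_UUID"
rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The later steps in the run follow the same pattern (bdev_lvol_snapshot, bdev_lvol_resize to 30, bdev_lvol_clone, bdev_lvol_inflate, then the matching delete calls during teardown), each driven through the same rpc.py entry point.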
00:12:37.083 16:54:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c554c730-0124-467e-9d5e-305698b5dd93 MY_SNAPSHOT 00:12:37.340 16:54:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b6f586eb-6e4e-44ab-9c55-f6d84872a565 00:12:37.340 16:54:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c554c730-0124-467e-9d5e-305698b5dd93 30 00:12:37.598 16:54:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b6f586eb-6e4e-44ab-9c55-f6d84872a565 MY_CLONE 00:12:37.855 16:54:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=44831c47-f9ae-4fd7-b7dd-d0f220e70401 00:12:37.855 16:54:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 44831c47-f9ae-4fd7-b7dd-d0f220e70401 00:12:38.420 16:54:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 24364 00:12:46.562 Initializing NVMe Controllers 00:12:46.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:46.562 Controller IO queue size 128, less than required. 00:12:46.562 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:46.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:46.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:46.562 Initialization complete. Launching workers. 
00:12:46.562 ======================================================== 00:12:46.562 Latency(us) 00:12:46.562 Device Information : IOPS MiB/s Average min max 00:12:46.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12208.80 47.69 10491.05 1891.66 71711.63 00:12:46.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12160.80 47.50 10532.78 3809.67 47259.98 00:12:46.562 ======================================================== 00:12:46.562 Total : 24369.60 95.19 10511.87 1891.66 71711.63 00:12:46.562 00:12:46.562 16:54:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:46.562 16:54:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c554c730-0124-467e-9d5e-305698b5dd93 00:12:46.819 16:54:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84a96477-bc7b-45a1-865c-98fd2e27326b 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.077 rmmod nvme_tcp 00:12:47.077 rmmod nvme_fabrics 00:12:47.077 rmmod nvme_keyring 00:12:47.077 
16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 23872 ']' 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 23872 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 23872 ']' 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 23872 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:47.077 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 23872 00:12:47.078 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:47.078 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:47.078 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 23872' 00:12:47.078 killing process with pid 23872 00:12:47.078 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 23872 00:12:47.078 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 23872 00:12:47.336 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:47.336 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:47.336 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:47.336 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.336 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:47.336 16:54:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:47.336 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.336 16:54:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.865 16:54:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:49.865 00:12:49.865 real 0m21.205s 00:12:49.865 user 1m3.853s 00:12:49.865 sys 0m6.480s 00:12:49.865 16:54:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.865 16:54:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:49.865 ************************************ 00:12:49.865 END TEST nvmf_lvol 00:12:49.865 ************************************ 00:12:49.865 16:54:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:49.865 16:54:55 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:49.865 16:54:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:49.865 16:54:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.865 16:54:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:49.865 ************************************ 00:12:49.866 START TEST nvmf_lvs_grow 00:12:49.866 ************************************ 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:49.866 * Looking for test storage... 
00:12:49.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.866 16:54:56 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.866 16:54:56 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:49.866 16:54:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:55.165 16:55:01 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.165 16:55:01 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:55.165 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:55.165 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.165 16:55:01 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.165 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:55.166 Found net devices under 0000:86:00.0: cvl_0_0 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:55.166 Found net devices under 0000:86:00.1: cvl_0_1 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:55.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:12:55.166 00:12:55.166 --- 10.0.0.2 ping statistics --- 00:12:55.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.166 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:12:55.166 00:12:55.166 --- 10.0.0.1 ping statistics --- 00:12:55.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.166 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=29548 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 29548 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 29548 ']' 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:55.166 16:55:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:55.166 [2024-07-15 16:55:01.371031] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:12:55.166 [2024-07-15 16:55:01.371075] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.166 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.166 [2024-07-15 16:55:01.427590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.166 [2024-07-15 16:55:01.510130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.166 [2024-07-15 16:55:01.510161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.166 [2024-07-15 16:55:01.510168] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.166 [2024-07-15 16:55:01.510174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.166 [2024-07-15 16:55:01.510179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:55.166 [2024-07-15 16:55:01.510198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.731 16:55:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:55.731 16:55:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:12:55.731 16:55:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:55.731 16:55:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:55.731 16:55:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:55.731 16:55:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.731 16:55:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:55.731 [2024-07-15 16:55:02.358725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.731 16:55:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:55.731 16:55:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:55.731 16:55:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.731 16:55:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:55.988 ************************************ 00:12:55.988 START TEST lvs_grow_clean 00:12:55.988 ************************************ 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:55.988 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:56.266 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:12:56.266 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:12:56.266 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:56.524 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:56.524 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:56.524 16:55:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 lvol 150 00:12:56.524 16:55:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=01e0a3ca-cccf-4d32-a0a6-7c45a0b153ce 00:12:56.524 16:55:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:56.524 16:55:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:56.782 [2024-07-15 16:55:03.311963] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:56.782 [2024-07-15 16:55:03.312010] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:56.782 true 00:12:56.782 16:55:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:12:56.782 16:55:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:57.039 16:55:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:57.039 16:55:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:12:57.039 16:55:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 01e0a3ca-cccf-4d32-a0a6-7c45a0b153ce 00:12:57.296 16:55:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:57.553 [2024-07-15 16:55:03.986003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.553 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:57.553 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=30093 00:12:57.553 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:57.553 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 30093 /var/tmp/bdevperf.sock 00:12:57.553 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 30093 ']' 00:12:57.553 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:57.553 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.553 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:57.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:57.553 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.553 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:57.553 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:57.553 [2024-07-15 16:55:04.206831] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:12:57.553 [2024-07-15 16:55:04.206880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid30093 ] 00:12:57.811 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.811 [2024-07-15 16:55:04.261691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.811 [2024-07-15 16:55:04.340732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.377 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:58.377 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:12:58.377 16:55:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:58.943 Nvme0n1 00:12:58.943 16:55:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:58.943 [ 00:12:58.943 { 00:12:58.943 "name": "Nvme0n1", 00:12:58.943 "aliases": [ 00:12:58.943 
"01e0a3ca-cccf-4d32-a0a6-7c45a0b153ce" 00:12:58.943 ], 00:12:58.943 "product_name": "NVMe disk", 00:12:58.943 "block_size": 4096, 00:12:58.943 "num_blocks": 38912, 00:12:58.943 "uuid": "01e0a3ca-cccf-4d32-a0a6-7c45a0b153ce", 00:12:58.943 "assigned_rate_limits": { 00:12:58.943 "rw_ios_per_sec": 0, 00:12:58.943 "rw_mbytes_per_sec": 0, 00:12:58.943 "r_mbytes_per_sec": 0, 00:12:58.943 "w_mbytes_per_sec": 0 00:12:58.943 }, 00:12:58.943 "claimed": false, 00:12:58.943 "zoned": false, 00:12:58.943 "supported_io_types": { 00:12:58.943 "read": true, 00:12:58.943 "write": true, 00:12:58.943 "unmap": true, 00:12:58.943 "flush": true, 00:12:58.943 "reset": true, 00:12:58.943 "nvme_admin": true, 00:12:58.943 "nvme_io": true, 00:12:58.943 "nvme_io_md": false, 00:12:58.943 "write_zeroes": true, 00:12:58.943 "zcopy": false, 00:12:58.943 "get_zone_info": false, 00:12:58.943 "zone_management": false, 00:12:58.943 "zone_append": false, 00:12:58.943 "compare": true, 00:12:58.943 "compare_and_write": true, 00:12:58.943 "abort": true, 00:12:58.943 "seek_hole": false, 00:12:58.943 "seek_data": false, 00:12:58.943 "copy": true, 00:12:58.943 "nvme_iov_md": false 00:12:58.943 }, 00:12:58.943 "memory_domains": [ 00:12:58.943 { 00:12:58.943 "dma_device_id": "system", 00:12:58.943 "dma_device_type": 1 00:12:58.943 } 00:12:58.943 ], 00:12:58.943 "driver_specific": { 00:12:58.943 "nvme": [ 00:12:58.943 { 00:12:58.943 "trid": { 00:12:58.943 "trtype": "TCP", 00:12:58.943 "adrfam": "IPv4", 00:12:58.943 "traddr": "10.0.0.2", 00:12:58.943 "trsvcid": "4420", 00:12:58.943 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:58.943 }, 00:12:58.943 "ctrlr_data": { 00:12:58.943 "cntlid": 1, 00:12:58.943 "vendor_id": "0x8086", 00:12:58.943 "model_number": "SPDK bdev Controller", 00:12:58.943 "serial_number": "SPDK0", 00:12:58.943 "firmware_revision": "24.09", 00:12:58.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:58.943 "oacs": { 00:12:58.943 "security": 0, 00:12:58.943 "format": 0, 00:12:58.943 "firmware": 0, 
00:12:58.943 "ns_manage": 0 00:12:58.943 }, 00:12:58.943 "multi_ctrlr": true, 00:12:58.943 "ana_reporting": false 00:12:58.943 }, 00:12:58.943 "vs": { 00:12:58.943 "nvme_version": "1.3" 00:12:58.943 }, 00:12:58.943 "ns_data": { 00:12:58.943 "id": 1, 00:12:58.943 "can_share": true 00:12:58.943 } 00:12:58.943 } 00:12:58.943 ], 00:12:58.943 "mp_policy": "active_passive" 00:12:58.943 } 00:12:58.943 } 00:12:58.943 ] 00:12:58.943 16:55:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=30342 00:12:58.943 16:55:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:58.943 16:55:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:59.201 Running I/O for 10 seconds... 00:13:00.135 Latency(us) 00:13:00.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.135 Nvme0n1 : 1.00 22134.00 86.46 0.00 0.00 0.00 0.00 0.00 00:13:00.135 =================================================================================================================== 00:13:00.135 Total : 22134.00 86.46 0.00 0.00 0.00 0.00 0.00 00:13:00.135 00:13:01.069 16:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:13:01.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:01.069 Nvme0n1 : 2.00 22191.00 86.68 0.00 0.00 0.00 0.00 0.00 00:13:01.069 =================================================================================================================== 00:13:01.069 Total : 22191.00 86.68 0.00 0.00 0.00 0.00 0.00 00:13:01.069 00:13:01.069 true 00:13:01.327 16:55:07 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:13:01.327 16:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:01.327 16:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:01.327 16:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:01.327 16:55:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 30342 00:13:02.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.261 Nvme0n1 : 3.00 22215.33 86.78 0.00 0.00 0.00 0.00 0.00 00:13:02.261 =================================================================================================================== 00:13:02.261 Total : 22215.33 86.78 0.00 0.00 0.00 0.00 0.00 00:13:02.261 00:13:03.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.195 Nvme0n1 : 4.00 22267.50 86.98 0.00 0.00 0.00 0.00 0.00 00:13:03.195 =================================================================================================================== 00:13:03.195 Total : 22267.50 86.98 0.00 0.00 0.00 0.00 0.00 00:13:03.195 00:13:04.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:04.129 Nvme0n1 : 5.00 22278.00 87.02 0.00 0.00 0.00 0.00 0.00 00:13:04.129 =================================================================================================================== 00:13:04.129 Total : 22278.00 87.02 0.00 0.00 0.00 0.00 0.00 00:13:04.129 00:13:05.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.061 Nvme0n1 : 6.00 22311.67 87.15 0.00 0.00 0.00 0.00 0.00 00:13:05.061 
=================================================================================================================== 00:13:05.061 Total : 22311.67 87.15 0.00 0.00 0.00 0.00 0.00 00:13:05.061 00:13:06.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:06.033 Nvme0n1 : 7.00 22333.43 87.24 0.00 0.00 0.00 0.00 0.00 00:13:06.033 =================================================================================================================== 00:13:06.033 Total : 22333.43 87.24 0.00 0.00 0.00 0.00 0.00 00:13:06.033 00:13:07.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:07.406 Nvme0n1 : 8.00 22361.75 87.35 0.00 0.00 0.00 0.00 0.00 00:13:07.406 =================================================================================================================== 00:13:07.406 Total : 22361.75 87.35 0.00 0.00 0.00 0.00 0.00 00:13:07.406 00:13:08.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.339 Nvme0n1 : 9.00 22380.22 87.42 0.00 0.00 0.00 0.00 0.00 00:13:08.339 =================================================================================================================== 00:13:08.339 Total : 22380.22 87.42 0.00 0.00 0.00 0.00 0.00 00:13:08.339 00:13:09.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.272 Nvme0n1 : 10.00 22395.00 87.48 0.00 0.00 0.00 0.00 0.00 00:13:09.272 =================================================================================================================== 00:13:09.272 Total : 22395.00 87.48 0.00 0.00 0.00 0.00 0.00 00:13:09.272 00:13:09.272 00:13:09.272 Latency(us) 00:13:09.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.273 Nvme0n1 : 10.01 22394.98 87.48 0.00 0.00 5711.41 1894.85 7864.32 00:13:09.273 
=================================================================================================================== 00:13:09.273 Total : 22394.98 87.48 0.00 0.00 5711.41 1894.85 7864.32 00:13:09.273 0 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 30093 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 30093 ']' 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 30093 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 30093 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 30093' 00:13:09.273 killing process with pid 30093 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 30093 00:13:09.273 Received shutdown signal, test time was about 10.000000 seconds 00:13:09.273 00:13:09.273 Latency(us) 00:13:09.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.273 =================================================================================================================== 00:13:09.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 30093 00:13:09.273 16:55:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:09.529 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:09.787 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:13:09.787 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:09.787 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:09.787 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:09.787 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:10.045 [2024-07-15 16:55:16.600811] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:10.045 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:13:10.302 request: 00:13:10.302 { 00:13:10.302 "uuid": "c8e35555-1fdb-41bd-b4dd-98ae44cb9840", 00:13:10.302 "method": "bdev_lvol_get_lvstores", 00:13:10.302 "req_id": 1 00:13:10.302 } 00:13:10.302 Got JSON-RPC error response 00:13:10.302 response: 00:13:10.302 { 00:13:10.302 "code": -19, 00:13:10.302 "message": "No such device" 00:13:10.302 } 00:13:10.302 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:10.302 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:10.302 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:10.302 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:10.302 16:55:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:10.559 aio_bdev 00:13:10.559 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 01e0a3ca-cccf-4d32-a0a6-7c45a0b153ce 00:13:10.559 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=01e0a3ca-cccf-4d32-a0a6-7c45a0b153ce 00:13:10.559 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:10.559 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:10.559 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:10.559 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:10.559 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:10.559 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 01e0a3ca-cccf-4d32-a0a6-7c45a0b153ce -t 2000 00:13:10.817 [ 00:13:10.817 { 00:13:10.817 "name": "01e0a3ca-cccf-4d32-a0a6-7c45a0b153ce", 00:13:10.817 "aliases": [ 00:13:10.817 "lvs/lvol" 00:13:10.817 ], 00:13:10.817 "product_name": "Logical Volume", 00:13:10.817 "block_size": 4096, 00:13:10.817 "num_blocks": 38912, 00:13:10.817 "uuid": "01e0a3ca-cccf-4d32-a0a6-7c45a0b153ce", 00:13:10.817 "assigned_rate_limits": { 00:13:10.817 
"rw_ios_per_sec": 0, 00:13:10.817 "rw_mbytes_per_sec": 0, 00:13:10.817 "r_mbytes_per_sec": 0, 00:13:10.817 "w_mbytes_per_sec": 0 00:13:10.817 }, 00:13:10.817 "claimed": false, 00:13:10.817 "zoned": false, 00:13:10.817 "supported_io_types": { 00:13:10.817 "read": true, 00:13:10.817 "write": true, 00:13:10.817 "unmap": true, 00:13:10.817 "flush": false, 00:13:10.817 "reset": true, 00:13:10.818 "nvme_admin": false, 00:13:10.818 "nvme_io": false, 00:13:10.818 "nvme_io_md": false, 00:13:10.818 "write_zeroes": true, 00:13:10.818 "zcopy": false, 00:13:10.818 "get_zone_info": false, 00:13:10.818 "zone_management": false, 00:13:10.818 "zone_append": false, 00:13:10.818 "compare": false, 00:13:10.818 "compare_and_write": false, 00:13:10.818 "abort": false, 00:13:10.818 "seek_hole": true, 00:13:10.818 "seek_data": true, 00:13:10.818 "copy": false, 00:13:10.818 "nvme_iov_md": false 00:13:10.818 }, 00:13:10.818 "driver_specific": { 00:13:10.818 "lvol": { 00:13:10.818 "lvol_store_uuid": "c8e35555-1fdb-41bd-b4dd-98ae44cb9840", 00:13:10.818 "base_bdev": "aio_bdev", 00:13:10.818 "thin_provision": false, 00:13:10.818 "num_allocated_clusters": 38, 00:13:10.818 "snapshot": false, 00:13:10.818 "clone": false, 00:13:10.818 "esnap_clone": false 00:13:10.818 } 00:13:10.818 } 00:13:10.818 } 00:13:10.818 ] 00:13:10.818 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:10.818 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:13:10.818 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:11.075 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:11.075 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:13:11.075 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:11.076 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:11.076 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 01e0a3ca-cccf-4d32-a0a6-7c45a0b153ce 00:13:11.333 16:55:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c8e35555-1fdb-41bd-b4dd-98ae44cb9840 00:13:11.591 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:11.591 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:11.591 00:13:11.591 real 0m15.830s 00:13:11.591 user 0m15.440s 00:13:11.591 sys 0m1.446s 00:13:11.591 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.591 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:11.591 ************************************ 00:13:11.591 END TEST lvs_grow_clean 00:13:11.591 ************************************ 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:11.849 ************************************ 00:13:11.849 START TEST lvs_grow_dirty 00:13:11.849 ************************************ 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:11.849 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:12.107 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:12.107 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:12.107 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:12.364 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:12.364 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:12.364 16:55:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 80d1750e-039b-4a25-9e1b-e0f178730ded lvol 150 00:13:12.364 16:55:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e90366da-b897-4fbf-a0f2-136ea97b0c89 00:13:12.364 16:55:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:12.364 16:55:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:12.622 [2024-07-15 16:55:19.180860] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:12.622 [2024-07-15 16:55:19.180911] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:12.622 
true 00:13:12.622 16:55:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:12.622 16:55:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:12.880 16:55:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:12.880 16:55:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:12.880 16:55:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e90366da-b897-4fbf-a0f2-136ea97b0c89 00:13:13.137 16:55:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:13.394 [2024-07-15 16:55:19.870935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.394 16:55:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.394 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=32816 00:13:13.394 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:13.394 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:13.394 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 32816 /var/tmp/bdevperf.sock 00:13:13.394 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 32816 ']' 00:13:13.394 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:13.394 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.394 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:13.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:13.394 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.394 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:13.652 [2024-07-15 16:55:20.097901] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:13:13.652 [2024-07-15 16:55:20.097950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid32816 ] 00:13:13.652 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.652 [2024-07-15 16:55:20.152578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.652 [2024-07-15 16:55:20.225922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.585 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.585 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:14.585 16:55:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:14.843 Nvme0n1 00:13:14.843 16:55:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:14.843 [ 00:13:14.843 { 00:13:14.843 "name": "Nvme0n1", 00:13:14.843 "aliases": [ 00:13:14.843 "e90366da-b897-4fbf-a0f2-136ea97b0c89" 00:13:14.843 ], 00:13:14.843 "product_name": "NVMe disk", 00:13:14.843 "block_size": 4096, 00:13:14.843 "num_blocks": 38912, 00:13:14.843 "uuid": "e90366da-b897-4fbf-a0f2-136ea97b0c89", 00:13:14.843 "assigned_rate_limits": { 00:13:14.843 "rw_ios_per_sec": 0, 00:13:14.843 "rw_mbytes_per_sec": 0, 00:13:14.843 "r_mbytes_per_sec": 0, 00:13:14.843 "w_mbytes_per_sec": 0 00:13:14.843 }, 00:13:14.843 "claimed": false, 00:13:14.843 "zoned": false, 00:13:14.843 "supported_io_types": { 00:13:14.843 "read": true, 00:13:14.843 "write": true, 
00:13:14.843 "unmap": true, 00:13:14.843 "flush": true, 00:13:14.843 "reset": true, 00:13:14.843 "nvme_admin": true, 00:13:14.843 "nvme_io": true, 00:13:14.843 "nvme_io_md": false, 00:13:14.843 "write_zeroes": true, 00:13:14.843 "zcopy": false, 00:13:14.843 "get_zone_info": false, 00:13:14.843 "zone_management": false, 00:13:14.843 "zone_append": false, 00:13:14.843 "compare": true, 00:13:14.843 "compare_and_write": true, 00:13:14.843 "abort": true, 00:13:14.843 "seek_hole": false, 00:13:14.843 "seek_data": false, 00:13:14.843 "copy": true, 00:13:14.843 "nvme_iov_md": false 00:13:14.843 }, 00:13:14.843 "memory_domains": [ 00:13:14.843 { 00:13:14.843 "dma_device_id": "system", 00:13:14.843 "dma_device_type": 1 00:13:14.843 } 00:13:14.843 ], 00:13:14.843 "driver_specific": { 00:13:14.843 "nvme": [ 00:13:14.843 { 00:13:14.843 "trid": { 00:13:14.843 "trtype": "TCP", 00:13:14.843 "adrfam": "IPv4", 00:13:14.843 "traddr": "10.0.0.2", 00:13:14.843 "trsvcid": "4420", 00:13:14.843 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:14.843 }, 00:13:14.843 "ctrlr_data": { 00:13:14.843 "cntlid": 1, 00:13:14.843 "vendor_id": "0x8086", 00:13:14.843 "model_number": "SPDK bdev Controller", 00:13:14.843 "serial_number": "SPDK0", 00:13:14.843 "firmware_revision": "24.09", 00:13:14.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:14.843 "oacs": { 00:13:14.843 "security": 0, 00:13:14.843 "format": 0, 00:13:14.843 "firmware": 0, 00:13:14.843 "ns_manage": 0 00:13:14.843 }, 00:13:14.843 "multi_ctrlr": true, 00:13:14.843 "ana_reporting": false 00:13:14.843 }, 00:13:14.843 "vs": { 00:13:14.843 "nvme_version": "1.3" 00:13:14.843 }, 00:13:14.843 "ns_data": { 00:13:14.843 "id": 1, 00:13:14.843 "can_share": true 00:13:14.843 } 00:13:14.843 } 00:13:14.843 ], 00:13:14.843 "mp_policy": "active_passive" 00:13:14.843 } 00:13:14.843 } 00:13:14.843 ] 00:13:14.843 16:55:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=33052 00:13:14.843 16:55:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:14.843 16:55:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:15.101 Running I/O for 10 seconds... 00:13:16.034 Latency(us) 00:13:16.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.035 Nvme0n1 : 1.00 22038.00 86.09 0.00 0.00 0.00 0.00 0.00 00:13:16.035 =================================================================================================================== 00:13:16.035 Total : 22038.00 86.09 0.00 0.00 0.00 0.00 0.00 00:13:16.035 00:13:16.969 16:55:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:16.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.969 Nvme0n1 : 2.00 22207.00 86.75 0.00 0.00 0.00 0.00 0.00 00:13:16.969 =================================================================================================================== 00:13:16.969 Total : 22207.00 86.75 0.00 0.00 0.00 0.00 0.00 00:13:16.969 00:13:16.969 true 00:13:16.969 16:55:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:16.969 16:55:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:17.227 16:55:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:17.227 16:55:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 
00:13:17.227 16:55:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 33052 00:13:18.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.161 Nvme0n1 : 3.00 22226.00 86.82 0.00 0.00 0.00 0.00 0.00 00:13:18.161 =================================================================================================================== 00:13:18.161 Total : 22226.00 86.82 0.00 0.00 0.00 0.00 0.00 00:13:18.161 00:13:19.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.094 Nvme0n1 : 4.00 22291.50 87.08 0.00 0.00 0.00 0.00 0.00 00:13:19.094 =================================================================================================================== 00:13:19.094 Total : 22291.50 87.08 0.00 0.00 0.00 0.00 0.00 00:13:19.094 00:13:20.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:20.028 Nvme0n1 : 5.00 22345.20 87.29 0.00 0.00 0.00 0.00 0.00 00:13:20.028 =================================================================================================================== 00:13:20.028 Total : 22345.20 87.29 0.00 0.00 0.00 0.00 0.00 00:13:20.028 00:13:20.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:20.982 Nvme0n1 : 6.00 22386.33 87.45 0.00 0.00 0.00 0.00 0.00 00:13:20.982 =================================================================================================================== 00:13:20.982 Total : 22386.33 87.45 0.00 0.00 0.00 0.00 0.00 00:13:20.982 00:13:21.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.915 Nvme0n1 : 7.00 22416.86 87.57 0.00 0.00 0.00 0.00 0.00 00:13:21.915 =================================================================================================================== 00:13:21.915 Total : 22416.86 87.57 0.00 0.00 0.00 0.00 0.00 00:13:21.915 00:13:23.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:13:23.288 Nvme0n1 : 8.00 22441.75 87.66 0.00 0.00 0.00 0.00 0.00 00:13:23.288 =================================================================================================================== 00:13:23.288 Total : 22441.75 87.66 0.00 0.00 0.00 0.00 0.00 00:13:23.288 00:13:24.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.263 Nvme0n1 : 9.00 22449.56 87.69 0.00 0.00 0.00 0.00 0.00 00:13:24.263 =================================================================================================================== 00:13:24.263 Total : 22449.56 87.69 0.00 0.00 0.00 0.00 0.00 00:13:24.263 00:13:25.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:25.196 Nvme0n1 : 10.00 22466.20 87.76 0.00 0.00 0.00 0.00 0.00 00:13:25.196 =================================================================================================================== 00:13:25.196 Total : 22466.20 87.76 0.00 0.00 0.00 0.00 0.00 00:13:25.196 00:13:25.196 00:13:25.197 Latency(us) 00:13:25.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:25.197 Nvme0n1 : 10.01 22466.66 87.76 0.00 0.00 5693.31 4388.06 13905.03 00:13:25.197 =================================================================================================================== 00:13:25.197 Total : 22466.66 87.76 0.00 0.00 5693.31 4388.06 13905.03 00:13:25.197 0 00:13:25.197 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 32816 00:13:25.197 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 32816 ']' 00:13:25.197 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 32816 00:13:25.197 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:25.197 16:55:31 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:25.197 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 32816
00:13:25.197 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:13:25.197 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:13:25.197 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 32816'
00:13:25.197 killing process with pid 32816
00:13:25.197 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 32816
00:13:25.197 Received shutdown signal, test time was about 10.000000 seconds
00:13:25.197
00:13:25.197 Latency(us)
00:13:25.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:25.197 ===================================================================================================================
00:13:25.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:13:25.197 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 32816
00:13:25.197 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:13:25.454 16:55:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:13:25.713 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d1750e-039b-4a25-9e1b-e0f178730ded
00:13:25.713 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:25.713 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:25.713 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:25.713 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 29548 00:13:25.713 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 29548 00:13:25.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 29548 Killed "${NVMF_APP[@]}" "$@" 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=34896 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 34896 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 34896 ']' 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.972 
16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.972 16:55:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:25.972 [2024-07-15 16:55:32.455357] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:13:25.972 [2024-07-15 16:55:32.455404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.972 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.972 [2024-07-15 16:55:32.512621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.972 [2024-07-15 16:55:32.591077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.972 [2024-07-15 16:55:32.591110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.972 [2024-07-15 16:55:32.591117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.972 [2024-07-15 16:55:32.591124] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.972 [2024-07-15 16:55:32.591129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:25.972 [2024-07-15 16:55:32.591145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:26.907 [2024-07-15 16:55:33.449409] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:26.907 [2024-07-15 16:55:33.449503] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:26.907 [2024-07-15 16:55:33.449529] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e90366da-b897-4fbf-a0f2-136ea97b0c89 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=e90366da-b897-4fbf-a0f2-136ea97b0c89 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:26.907 16:55:33 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:26.907 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:27.165 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e90366da-b897-4fbf-a0f2-136ea97b0c89 -t 2000 00:13:27.165 [ 00:13:27.165 { 00:13:27.165 "name": "e90366da-b897-4fbf-a0f2-136ea97b0c89", 00:13:27.165 "aliases": [ 00:13:27.165 "lvs/lvol" 00:13:27.165 ], 00:13:27.165 "product_name": "Logical Volume", 00:13:27.165 "block_size": 4096, 00:13:27.165 "num_blocks": 38912, 00:13:27.165 "uuid": "e90366da-b897-4fbf-a0f2-136ea97b0c89", 00:13:27.165 "assigned_rate_limits": { 00:13:27.165 "rw_ios_per_sec": 0, 00:13:27.165 "rw_mbytes_per_sec": 0, 00:13:27.165 "r_mbytes_per_sec": 0, 00:13:27.165 "w_mbytes_per_sec": 0 00:13:27.165 }, 00:13:27.165 "claimed": false, 00:13:27.165 "zoned": false, 00:13:27.165 "supported_io_types": { 00:13:27.165 "read": true, 00:13:27.165 "write": true, 00:13:27.165 "unmap": true, 00:13:27.165 "flush": false, 00:13:27.165 "reset": true, 00:13:27.165 "nvme_admin": false, 00:13:27.165 "nvme_io": false, 00:13:27.165 "nvme_io_md": false, 00:13:27.165 "write_zeroes": true, 00:13:27.165 "zcopy": false, 00:13:27.165 "get_zone_info": false, 00:13:27.165 "zone_management": false, 00:13:27.165 "zone_append": false, 00:13:27.165 "compare": false, 00:13:27.165 "compare_and_write": false, 00:13:27.165 "abort": false, 00:13:27.165 "seek_hole": true, 00:13:27.165 "seek_data": true, 00:13:27.165 "copy": false, 00:13:27.165 "nvme_iov_md": false 
00:13:27.165 }, 00:13:27.165 "driver_specific": { 00:13:27.165 "lvol": { 00:13:27.165 "lvol_store_uuid": "80d1750e-039b-4a25-9e1b-e0f178730ded", 00:13:27.165 "base_bdev": "aio_bdev", 00:13:27.165 "thin_provision": false, 00:13:27.165 "num_allocated_clusters": 38, 00:13:27.165 "snapshot": false, 00:13:27.165 "clone": false, 00:13:27.165 "esnap_clone": false 00:13:27.165 } 00:13:27.165 } 00:13:27.165 } 00:13:27.165 ] 00:13:27.165 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:27.165 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:27.165 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:27.423 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:27.423 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:27.423 16:55:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:27.681 [2024-07-15 16:55:34.293810] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:27.681 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:27.938 request: 00:13:27.938 { 00:13:27.938 "uuid": "80d1750e-039b-4a25-9e1b-e0f178730ded", 00:13:27.938 "method": "bdev_lvol_get_lvstores", 
00:13:27.938 "req_id": 1
00:13:27.938 }
00:13:27.938 Got JSON-RPC error response
response:
00:13:27.938 {
00:13:27.938 "code": -19,
00:13:27.938 "message": "No such device"
00:13:27.938 }
00:13:27.938 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1
00:13:27.938 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:13:27.938 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:13:27.938 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:13:27.938 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:13:28.195 aio_bdev
00:13:28.195 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e90366da-b897-4fbf-a0f2-136ea97b0c89
00:13:28.195 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=e90366da-b897-4fbf-a0f2-136ea97b0c89
00:13:28.195 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:28.195 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i
00:13:28.195 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:28.195 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:28.195 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:13:28.195 16:55:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e90366da-b897-4fbf-a0f2-136ea97b0c89 -t 2000 00:13:28.455 [ 00:13:28.455 { 00:13:28.455 "name": "e90366da-b897-4fbf-a0f2-136ea97b0c89", 00:13:28.455 "aliases": [ 00:13:28.455 "lvs/lvol" 00:13:28.455 ], 00:13:28.455 "product_name": "Logical Volume", 00:13:28.455 "block_size": 4096, 00:13:28.455 "num_blocks": 38912, 00:13:28.455 "uuid": "e90366da-b897-4fbf-a0f2-136ea97b0c89", 00:13:28.455 "assigned_rate_limits": { 00:13:28.455 "rw_ios_per_sec": 0, 00:13:28.455 "rw_mbytes_per_sec": 0, 00:13:28.455 "r_mbytes_per_sec": 0, 00:13:28.455 "w_mbytes_per_sec": 0 00:13:28.455 }, 00:13:28.455 "claimed": false, 00:13:28.455 "zoned": false, 00:13:28.455 "supported_io_types": { 00:13:28.455 "read": true, 00:13:28.455 "write": true, 00:13:28.455 "unmap": true, 00:13:28.455 "flush": false, 00:13:28.455 "reset": true, 00:13:28.455 "nvme_admin": false, 00:13:28.455 "nvme_io": false, 00:13:28.455 "nvme_io_md": false, 00:13:28.455 "write_zeroes": true, 00:13:28.455 "zcopy": false, 00:13:28.455 "get_zone_info": false, 00:13:28.455 "zone_management": false, 00:13:28.455 "zone_append": false, 00:13:28.455 "compare": false, 00:13:28.455 "compare_and_write": false, 00:13:28.455 "abort": false, 00:13:28.455 "seek_hole": true, 00:13:28.455 "seek_data": true, 00:13:28.455 "copy": false, 00:13:28.455 "nvme_iov_md": false 00:13:28.455 }, 00:13:28.455 "driver_specific": { 00:13:28.455 "lvol": { 00:13:28.455 "lvol_store_uuid": "80d1750e-039b-4a25-9e1b-e0f178730ded", 00:13:28.455 "base_bdev": "aio_bdev", 00:13:28.455 "thin_provision": false, 00:13:28.455 "num_allocated_clusters": 38, 00:13:28.455 "snapshot": false, 00:13:28.455 "clone": false, 00:13:28.455 "esnap_clone": false 00:13:28.455 } 00:13:28.455 } 00:13:28.455 } 00:13:28.455 ] 00:13:28.455 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:28.455 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:28.455 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:28.713 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:28.713 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:28.713 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:28.713 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:28.713 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e90366da-b897-4fbf-a0f2-136ea97b0c89 00:13:28.971 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 80d1750e-039b-4a25-9e1b-e0f178730ded 00:13:29.229 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:29.229 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.487 00:13:29.487 real 0m17.614s 00:13:29.487 user 0m44.847s 00:13:29.487 sys 0m4.142s 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:13:29.487 ************************************ 00:13:29.487 END TEST lvs_grow_dirty 00:13:29.487 ************************************ 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:29.487 16:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:29.487 nvmf_trace.0 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:29.487 rmmod 
nvme_tcp 00:13:29.487 rmmod nvme_fabrics 00:13:29.487 rmmod nvme_keyring 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 34896 ']' 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 34896 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 34896 ']' 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 34896 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 34896 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 34896' 00:13:29.487 killing process with pid 34896 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 34896 00:13:29.487 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 34896 00:13:29.745 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:29.745 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:29.745 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:29.745 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.745 16:55:36 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:29.745 16:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.745 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.745 16:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.270 16:55:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:32.270 00:13:32.270 real 0m42.349s 00:13:32.270 user 1m5.965s 00:13:32.270 sys 0m9.965s 00:13:32.270 16:55:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:32.270 16:55:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:32.270 ************************************ 00:13:32.270 END TEST nvmf_lvs_grow 00:13:32.270 ************************************ 00:13:32.270 16:55:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:32.270 16:55:38 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:32.270 16:55:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:32.270 16:55:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.270 16:55:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:32.270 ************************************ 00:13:32.270 START TEST nvmf_bdev_io_wait 00:13:32.270 ************************************ 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:32.270 * Looking for test storage... 
00:13:32.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:32.270 16:55:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.553 16:55:43 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:37.553 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:37.553 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:37.553 16:55:43 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:37.553 Found net devices under 0000:86:00.0: cvl_0_0 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:37.553 Found net devices under 0000:86:00.1: cvl_0_1 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.553 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:37.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:37.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:13:37.554 00:13:37.554 --- 10.0.0.2 ping statistics --- 00:13:37.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.554 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:13:37.554 00:13:37.554 --- 10.0.0.1 ping statistics --- 00:13:37.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.554 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=38960 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 38960 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 38960 ']' 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.554 16:55:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:37.554 [2024-07-15 16:55:43.909084] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:13:37.554 [2024-07-15 16:55:43.909128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.554 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.554 [2024-07-15 16:55:43.967999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.554 [2024-07-15 16:55:44.042152] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:37.554 [2024-07-15 16:55:44.042210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.554 [2024-07-15 16:55:44.042218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.554 [2024-07-15 16:55:44.042228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.554 [2024-07-15 16:55:44.042233] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.554 [2024-07-15 16:55:44.042284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.554 [2024-07-15 16:55:44.042382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.554 [2024-07-15 16:55:44.042447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.554 [2024-07-15 16:55:44.042448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.119 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.377 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.377 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.377 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.377 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.377 [2024-07-15 16:55:44.836341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.377 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.377 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:38.377 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.378 Malloc0 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.378 [2024-07-15 16:55:44.898041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=39209 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=39211 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:38.378 { 00:13:38.378 "params": { 00:13:38.378 "name": "Nvme$subsystem", 00:13:38.378 "trtype": "$TEST_TRANSPORT", 
00:13:38.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.378 "adrfam": "ipv4", 00:13:38.378 "trsvcid": "$NVMF_PORT", 00:13:38.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.378 "hdgst": ${hdgst:-false}, 00:13:38.378 "ddgst": ${ddgst:-false} 00:13:38.378 }, 00:13:38.378 "method": "bdev_nvme_attach_controller" 00:13:38.378 } 00:13:38.378 EOF 00:13:38.378 )") 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=39213 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:38.378 { 00:13:38.378 "params": { 00:13:38.378 "name": "Nvme$subsystem", 00:13:38.378 "trtype": "$TEST_TRANSPORT", 00:13:38.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.378 "adrfam": "ipv4", 00:13:38.378 "trsvcid": "$NVMF_PORT", 00:13:38.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.378 "hdgst": ${hdgst:-false}, 00:13:38.378 "ddgst": ${ddgst:-false} 00:13:38.378 }, 00:13:38.378 "method": "bdev_nvme_attach_controller" 00:13:38.378 } 
00:13:38.378 EOF 00:13:38.378 )") 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=39216 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:38.378 { 00:13:38.378 "params": { 00:13:38.378 "name": "Nvme$subsystem", 00:13:38.378 "trtype": "$TEST_TRANSPORT", 00:13:38.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.378 "adrfam": "ipv4", 00:13:38.378 "trsvcid": "$NVMF_PORT", 00:13:38.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.378 "hdgst": ${hdgst:-false}, 00:13:38.378 "ddgst": ${ddgst:-false} 00:13:38.378 }, 00:13:38.378 "method": "bdev_nvme_attach_controller" 00:13:38.378 } 00:13:38.378 EOF 00:13:38.378 )") 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:38.378 { 00:13:38.378 "params": { 00:13:38.378 "name": "Nvme$subsystem", 00:13:38.378 "trtype": "$TEST_TRANSPORT", 00:13:38.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.378 "adrfam": "ipv4", 00:13:38.378 "trsvcid": "$NVMF_PORT", 00:13:38.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.378 "hdgst": ${hdgst:-false}, 00:13:38.378 "ddgst": ${ddgst:-false} 00:13:38.378 }, 00:13:38.378 "method": "bdev_nvme_attach_controller" 00:13:38.378 } 00:13:38.378 EOF 00:13:38.378 )") 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 39209 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:38.378 "params": { 00:13:38.378 "name": "Nvme1", 00:13:38.378 "trtype": "tcp", 00:13:38.378 "traddr": "10.0.0.2", 00:13:38.378 "adrfam": "ipv4", 00:13:38.378 "trsvcid": "4420", 00:13:38.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.378 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.378 "hdgst": false, 00:13:38.378 "ddgst": false 00:13:38.378 }, 00:13:38.378 "method": "bdev_nvme_attach_controller" 00:13:38.378 }' 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:13:38.378 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:38.378 "params": { 00:13:38.378 "name": "Nvme1", 00:13:38.378 "trtype": "tcp", 00:13:38.378 "traddr": "10.0.0.2", 00:13:38.378 "adrfam": "ipv4", 00:13:38.378 "trsvcid": "4420", 00:13:38.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.379 "hdgst": false, 00:13:38.379 "ddgst": false 00:13:38.379 }, 00:13:38.379 "method": "bdev_nvme_attach_controller" 00:13:38.379 }' 00:13:38.379 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:38.379 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:38.379 "params": { 00:13:38.379 "name": "Nvme1", 00:13:38.379 "trtype": "tcp", 00:13:38.379 "traddr": "10.0.0.2", 00:13:38.379 "adrfam": "ipv4", 00:13:38.379 "trsvcid": "4420", 00:13:38.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.379 "hdgst": false, 00:13:38.379 "ddgst": false 00:13:38.379 }, 00:13:38.379 "method": "bdev_nvme_attach_controller" 00:13:38.379 }' 00:13:38.379 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:38.379 16:55:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:38.379 "params": { 00:13:38.379 "name": "Nvme1", 00:13:38.379 "trtype": "tcp", 00:13:38.379 "traddr": "10.0.0.2", 00:13:38.379 "adrfam": "ipv4", 00:13:38.379 "trsvcid": "4420", 00:13:38.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.379 "hdgst": false, 00:13:38.379 "ddgst": false 00:13:38.379 }, 00:13:38.379 "method": "bdev_nvme_attach_controller" 00:13:38.379 }' 00:13:38.379 [2024-07-15 16:55:44.945912] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:13:38.379 [2024-07-15 16:55:44.945966] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:38.379 [2024-07-15 16:55:44.949723] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:13:38.379 [2024-07-15 16:55:44.949722] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:13:38.379 [2024-07-15 16:55:44.949767] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:38.379 [2024-07-15 16:55:44.949768] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:38.379 [2024-07-15 16:55:44.951913] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:13:38.379 [2024-07-15 16:55:44.951954] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:38.379 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.655 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.655 [2024-07-15 16:55:45.121558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.655 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.655 [2024-07-15 16:55:45.174952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.655 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.655 [2024-07-15 16:55:45.216554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:38.655 [2024-07-15 16:55:45.229591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.655 [2024-07-15 16:55:45.250666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:38.655 [2024-07-15 16:55:45.302965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:38.912 [2024-07-15 16:55:45.325396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.912 [2024-07-15 16:55:45.417988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:38.912 Running I/O for 1 seconds... 00:13:38.912 Running I/O for 1 seconds... 00:13:39.168 Running I/O for 1 seconds... 00:13:39.168 Running I/O for 1 seconds... 
00:13:40.097 00:13:40.097 Latency(us) 00:13:40.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.097 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:40.097 Nvme1n1 : 1.01 12178.34 47.57 0.00 0.00 10475.79 5955.23 18008.15 00:13:40.097 =================================================================================================================== 00:13:40.097 Total : 12178.34 47.57 0.00 0.00 10475.79 5955.23 18008.15 00:13:40.097 00:13:40.097 Latency(us) 00:13:40.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.097 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:40.097 Nvme1n1 : 1.01 10948.50 42.77 0.00 0.00 11652.57 6012.22 20287.67 00:13:40.097 =================================================================================================================== 00:13:40.097 Total : 10948.50 42.77 0.00 0.00 11652.57 6012.22 20287.67 00:13:40.097 00:13:40.097 Latency(us) 00:13:40.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.097 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:40.097 Nvme1n1 : 1.01 10691.74 41.76 0.00 0.00 11933.49 5755.77 22909.11 00:13:40.097 =================================================================================================================== 00:13:40.097 Total : 10691.74 41.76 0.00 0.00 11933.49 5755.77 22909.11 00:13:40.097 00:13:40.097 Latency(us) 00:13:40.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.097 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:40.097 Nvme1n1 : 1.00 244595.10 955.45 0.00 0.00 521.11 214.59 666.05 00:13:40.097 =================================================================================================================== 00:13:40.097 Total : 244595.10 955.45 0.00 0.00 521.11 214.59 666.05 00:13:40.097 16:55:46 nvmf_tcp.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@38 -- # wait 39211 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 39213 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 39216 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.355 rmmod nvme_tcp 00:13:40.355 rmmod nvme_fabrics 00:13:40.355 rmmod nvme_keyring 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 38960 ']' 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 38960 00:13:40.355 16:55:46 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 38960 ']' 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 38960 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 38960 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 38960' 00:13:40.355 killing process with pid 38960 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 38960 00:13:40.355 16:55:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 38960 00:13:40.644 16:55:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:40.644 16:55:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:40.644 16:55:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:40.644 16:55:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.644 16:55:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:40.644 16:55:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.644 16:55:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.644 16:55:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.176 16:55:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- 
# ip -4 addr flush cvl_0_1 00:13:43.176 00:13:43.176 real 0m10.821s 00:13:43.176 user 0m19.444s 00:13:43.176 sys 0m5.697s 00:13:43.176 16:55:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.176 16:55:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:43.176 ************************************ 00:13:43.176 END TEST nvmf_bdev_io_wait 00:13:43.176 ************************************ 00:13:43.176 16:55:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:43.176 16:55:49 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:43.176 16:55:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:43.176 16:55:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.176 16:55:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:43.176 ************************************ 00:13:43.176 START TEST nvmf_queue_depth 00:13:43.176 ************************************ 00:13:43.176 16:55:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:43.176 * Looking for test storage... 
00:13:43.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.176 16:55:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.176 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.177 16:55:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.451 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:48.452 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:48.452 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:48.452 Found net devices under 0000:86:00.0: cvl_0_0 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:48.452 Found net devices under 0000:86:00.1: cvl_0_1 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:48.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:48.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms
00:13:48.452
00:13:48.452 --- 10.0.0.2 ping statistics ---
00:13:48.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:48.452 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:48.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:48.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms
00:13:48.452
00:13:48.452 --- 10.0.0.1 ping statistics ---
00:13:48.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:48.452 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable
00:13:48.452 16:55:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- #
set +x 00:13:48.452 16:55:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=42996 00:13:48.452 16:55:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 42996 00:13:48.453 16:55:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:48.453 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 42996 ']' 00:13:48.453 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.453 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.453 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.453 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.453 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:48.453 [2024-07-15 16:55:55.049934] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:13:48.453 [2024-07-15 16:55:55.049976] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.453 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.453 [2024-07-15 16:55:55.106007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.724 [2024-07-15 16:55:55.185618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:48.724 [2024-07-15 16:55:55.185654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.724 [2024-07-15 16:55:55.185661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.724 [2024-07-15 16:55:55.185668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.724 [2024-07-15 16:55:55.185673] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.724 [2024-07-15 16:55:55.185692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.289 [2024-07-15 16:55:55.884614] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:49.289 16:55:55 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.289 Malloc0 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.289 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.290 [2024-07-15 16:55:55.939968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=43142 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 43142 /var/tmp/bdevperf.sock 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 43142 ']' 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.290 16:55:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.547 [2024-07-15 16:55:55.989316] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:13:49.547 [2024-07-15 16:55:55.989358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43142 ] 00:13:49.547 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.547 [2024-07-15 16:55:56.043522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.547 [2024-07-15 16:55:56.116832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.477 16:55:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.477 16:55:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:50.477 16:55:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:50.477 16:55:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.477 16:55:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.477 NVMe0n1 00:13:50.477 16:55:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.477 16:55:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:50.477 Running I/O for 10 seconds... 
00:14:02.659 00:14:02.659 Latency(us) 00:14:02.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.659 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:02.659 Verification LBA range: start 0x0 length 0x4000 00:14:02.659 NVMe0n1 : 10.06 12469.73 48.71 0.00 0.00 81822.67 19831.76 55848.07 00:14:02.659 =================================================================================================================== 00:14:02.659 Total : 12469.73 48.71 0.00 0.00 81822.67 19831.76 55848.07 00:14:02.659 0 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 43142 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 43142 ']' 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 43142 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 43142 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 43142' 00:14:02.659 killing process with pid 43142 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 43142 00:14:02.659 Received shutdown signal, test time was about 10.000000 seconds 00:14:02.659 00:14:02.659 Latency(us) 00:14:02.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.659 
=================================================================================================================== 00:14:02.659 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 43142 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.659 rmmod nvme_tcp 00:14:02.659 rmmod nvme_fabrics 00:14:02.659 rmmod nvme_keyring 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 42996 ']' 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 42996 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 42996 ']' 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 42996 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:02.659 16:56:07 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 42996 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 42996' 00:14:02.659 killing process with pid 42996 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 42996 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 42996 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:02.659 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:02.660 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.660 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:02.660 16:56:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.660 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.660 16:56:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.225 16:56:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:03.225 00:14:03.225 real 0m20.530s 00:14:03.225 user 0m25.297s 00:14:03.225 sys 0m5.635s 00:14:03.225 16:56:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:03.225 16:56:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:03.225 ************************************ 00:14:03.225 END TEST nvmf_queue_depth 00:14:03.225 
************************************ 00:14:03.225 16:56:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:03.225 16:56:09 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:03.225 16:56:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:03.225 16:56:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.225 16:56:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:03.482 ************************************ 00:14:03.482 START TEST nvmf_target_multipath 00:14:03.482 ************************************ 00:14:03.482 16:56:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:03.482 * Looking for test storage... 00:14:03.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.482 
16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.482 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:03.483 16:56:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:08.746 
16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.746 16:56:15 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:08.746 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:08.746 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.746 
16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:08.746 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:08.747 Found net devices under 0000:86:00.0: cvl_0_0 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:08.747 Found net devices under 0000:86:00.1: cvl_0_1 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:08.747 16:56:15 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:08.747 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:09.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:14:09.006 00:14:09.006 --- 10.0.0.2 ping statistics --- 00:14:09.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.006 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:14:09.006 00:14:09.006 --- 10.0.0.1 ping statistics --- 00:14:09.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.006 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:09.006 only one NIC for nvmf test 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.006 rmmod nvme_tcp 00:14:09.006 rmmod nvme_fabrics 00:14:09.006 rmmod nvme_keyring 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.006 16:56:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.534 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:14:11.534 16:56:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:11.534 16:56:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:11.534 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:11.534 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:11.534 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:11.534 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:11.535 00:14:11.535 real 0m7.710s 00:14:11.535 user 0m1.461s 00:14:11.535 sys 0m4.191s 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.535 16:56:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:11.535 ************************************ 00:14:11.535 END TEST nvmf_target_multipath 00:14:11.535 ************************************ 00:14:11.535 16:56:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:11.535 16:56:17 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:11.535 16:56:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:11.535 16:56:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.535 16:56:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:11.535 ************************************ 00:14:11.535 START TEST nvmf_zcopy 00:14:11.535 ************************************ 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:11.535 * Looking for test storage... 
00:14:11.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:11.535 16:56:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.791 16:56:23 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:16.791 16:56:23 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:16.791 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:16.791 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:16.791 Found net devices under 0000:86:00.0: cvl_0_0 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.791 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:16.791 Found net devices under 0000:86:00.1: cvl_0_1 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.792 16:56:23 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:14:16.792 00:14:16.792 --- 10.0.0.2 ping statistics --- 00:14:16.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.792 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:14:16.792 00:14:16.792 --- 10.0.0.1 ping statistics --- 00:14:16.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.792 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=51894 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 51894 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 51894 ']' 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.792 16:56:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:16.792 [2024-07-15 16:56:23.451215] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:14:16.792 [2024-07-15 16:56:23.451266] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.049 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.049 [2024-07-15 16:56:23.506521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.049 [2024-07-15 16:56:23.582946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.049 [2024-07-15 16:56:23.582985] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.049 [2024-07-15 16:56:23.582992] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.049 [2024-07-15 16:56:23.582998] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.049 [2024-07-15 16:56:23.583003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:17.049 [2024-07-15 16:56:23.583024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.612 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.612 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:17.612 16:56:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.612 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.612 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:17.612 16:56:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.612 16:56:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:17.870 [2024-07-15 16:56:24.287176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:17.870 [2024-07-15 16:56:24.307355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:17.870 malloc0 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:17.870 { 00:14:17.870 "params": { 00:14:17.870 "name": "Nvme$subsystem", 00:14:17.870 "trtype": "$TEST_TRANSPORT", 00:14:17.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:17.870 "adrfam": "ipv4", 00:14:17.870 "trsvcid": "$NVMF_PORT", 00:14:17.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:17.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:17.870 "hdgst": ${hdgst:-false}, 00:14:17.870 "ddgst": ${ddgst:-false} 00:14:17.870 }, 00:14:17.870 "method": "bdev_nvme_attach_controller" 00:14:17.870 } 00:14:17.870 EOF 00:14:17.870 )") 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:17.870 16:56:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:17.870 "params": { 00:14:17.870 "name": "Nvme1", 00:14:17.870 "trtype": "tcp", 00:14:17.870 "traddr": "10.0.0.2", 00:14:17.870 "adrfam": "ipv4", 00:14:17.870 "trsvcid": "4420", 00:14:17.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:17.870 "hdgst": false, 00:14:17.870 "ddgst": false 00:14:17.870 }, 00:14:17.870 "method": "bdev_nvme_attach_controller" 00:14:17.870 }' 00:14:17.870 [2024-07-15 16:56:24.387045] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:14:17.870 [2024-07-15 16:56:24.387089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52140 ] 00:14:17.870 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.870 [2024-07-15 16:56:24.440228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.870 [2024-07-15 16:56:24.513341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.128 Running I/O for 10 seconds... 00:14:28.117 00:14:28.117 Latency(us) 00:14:28.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.117 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:28.117 Verification LBA range: start 0x0 length 0x1000 00:14:28.117 Nvme1n1 : 10.01 8610.18 67.27 0.00 0.00 14822.90 470.15 25416.57 00:14:28.117 =================================================================================================================== 00:14:28.117 Total : 8610.18 67.27 0.00 0.00 14822.90 470.15 25416.57 00:14:28.374 16:56:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=53751 00:14:28.374 16:56:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:28.374 16:56:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:28.374 16:56:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:28.374 16:56:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:28.374 16:56:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:28.374 16:56:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:28.374 16:56:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:28.374 16:56:34 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:28.374 { 00:14:28.375 "params": { 00:14:28.375 "name": "Nvme$subsystem", 00:14:28.375 "trtype": "$TEST_TRANSPORT", 00:14:28.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:28.375 "adrfam": "ipv4", 00:14:28.375 "trsvcid": "$NVMF_PORT", 00:14:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:28.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:28.375 "hdgst": ${hdgst:-false}, 00:14:28.375 "ddgst": ${ddgst:-false} 00:14:28.375 }, 00:14:28.375 "method": "bdev_nvme_attach_controller" 00:14:28.375 } 00:14:28.375 EOF 00:14:28.375 )") 00:14:28.375 [2024-07-15 16:56:34.897532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.375 [2024-07-15 16:56:34.897570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.375 16:56:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:28.375 16:56:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:14:28.375 16:56:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:28.375 16:56:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:28.375 "params": { 00:14:28.375 "name": "Nvme1", 00:14:28.375 "trtype": "tcp", 00:14:28.375 "traddr": "10.0.0.2", 00:14:28.375 "adrfam": "ipv4", 00:14:28.375 "trsvcid": "4420", 00:14:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:28.375 "hdgst": false, 00:14:28.375 "ddgst": false 00:14:28.375 }, 00:14:28.375 "method": "bdev_nvme_attach_controller" 00:14:28.375 }' 00:14:28.375 [2024-07-15 16:56:34.909531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.375 [2024-07-15 16:56:34.909545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.375 [2024-07-15 16:56:34.921557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.375 [2024-07-15 16:56:34.921567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.375 [2024-07-15 16:56:34.933589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.375 [2024-07-15 16:56:34.933599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.375 [2024-07-15 16:56:34.938498] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:14:28.375 [2024-07-15 16:56:34.938539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53751 ]
00:14:28.375 [2024-07-15 16:56:34.945621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.375 [2024-07-15 16:56:34.945633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.375 [2024-07-15 16:56:34.957652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.375 [2024-07-15 16:56:34.957662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.375 EAL: No free 2048 kB hugepages reported on node 1
00:14:28.375 [2024-07-15 16:56:34.969684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.375 [2024-07-15 16:56:34.969695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.375 [2024-07-15 16:56:34.981719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.375 [2024-07-15 16:56:34.981730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.375 [2024-07-15 16:56:34.993748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.375 [2024-07-15 16:56:34.993759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.375 [2024-07-15 16:56:34.994004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:28.375 [2024-07-15 16:56:35.005787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.375 [2024-07-15 16:56:35.005800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.375 [2024-07-15 16:56:35.017816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.375 [2024-07-15 16:56:35.017826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.375 [2024-07-15 16:56:35.029849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.375 [2024-07-15 16:56:35.029861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.375 [2024-07-15 16:56:35.041887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.375 [2024-07-15 16:56:35.041908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.053914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.053924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.065949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.065959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.070193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:28.631 [2024-07-15 16:56:35.077978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.077989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.090018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.090039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.102049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.102061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.114076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.114087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.126111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.126122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.138141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.138152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.150171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.150180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.162221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.162250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.174249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.174279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.186285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.186298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.198311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.198321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.210340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.210350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.222381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.222396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.631 [2024-07-15 16:56:35.234415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.631 [2024-07-15 16:56:35.234429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.632 [2024-07-15 16:56:35.246444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.632 [2024-07-15 16:56:35.246456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.632 [2024-07-15 16:56:35.258484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:28.632 [2024-07-15 16:56:35.258502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:28.632 Running I/O for 5 seconds...
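The repeated error pairs that continue below come from the test repeatedly trying to add a namespace whose NSID is already taken while I/O runs. As a toy model of that check (not SPDK's actual implementation; `Subsystem` and `add_ns` here are illustrative names only):

```python
# Toy model of an NVMe-oF subsystem's namespace table: adding a
# namespace with an NSID that is already present is rejected, which
# is exactly what produces the repeated error pairs in this log.
class Subsystem:
    def __init__(self):
        self.namespaces = {}  # nsid -> backing bdev name

    def add_ns(self, nsid, bdev):
        if nsid in self.namespaces:
            raise ValueError(f"Requested NSID {nsid} already in use")
        self.namespaces[nsid] = bdev

ss = Subsystem()
ss.add_ns(1, "Malloc0")
try:
    ss.add_ns(1, "Malloc1")   # same NSID -> rejected, as in the log
except ValueError as e:
    print(e)                  # Requested NSID 1 already in use
```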
00:14:28.632 [2024-07-15 16:56:35.270510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.632 [2024-07-15 16:56:35.270520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.632 [2024-07-15 16:56:35.286338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.632 [2024-07-15 16:56:35.286358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.632 [2024-07-15 16:56:35.295265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.632 [2024-07-15 16:56:35.295289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.310140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.310159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.325947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.325967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.340035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.340053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.349068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.349086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.363315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.363333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.376968] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.376985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.386009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.386027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.400399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.400417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.409407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.409425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.423879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.423896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.432736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.432754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.441515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.441534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.456332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.456351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.467073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.467091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.888 [2024-07-15 16:56:35.481393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.888 [2024-07-15 16:56:35.481411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.889 [2024-07-15 16:56:35.490515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.889 [2024-07-15 16:56:35.490534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.889 [2024-07-15 16:56:35.504342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.889 [2024-07-15 16:56:35.504361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.889 [2024-07-15 16:56:35.517671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.889 [2024-07-15 16:56:35.517689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.889 [2024-07-15 16:56:35.526692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.889 [2024-07-15 16:56:35.526710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.889 [2024-07-15 16:56:35.540942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.889 [2024-07-15 16:56:35.540961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.889 [2024-07-15 16:56:35.549703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.889 [2024-07-15 16:56:35.549720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.144 [2024-07-15 16:56:35.564648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.144 
[2024-07-15 16:56:35.564667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.144 [2024-07-15 16:56:35.579601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.144 [2024-07-15 16:56:35.579619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.144 [2024-07-15 16:56:35.588528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.144 [2024-07-15 16:56:35.588546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.144 [2024-07-15 16:56:35.598015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.144 [2024-07-15 16:56:35.598032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.144 [2024-07-15 16:56:35.607208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.144 [2024-07-15 16:56:35.607231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.144 [2024-07-15 16:56:35.621729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.144 [2024-07-15 16:56:35.621748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.144 [2024-07-15 16:56:35.635568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.144 [2024-07-15 16:56:35.635586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.144 [2024-07-15 16:56:35.644577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.144 [2024-07-15 16:56:35.644596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.144 [2024-07-15 16:56:35.653881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.144 [2024-07-15 16:56:35.653899] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.144 [2024-07-15 16:56:35.663010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.144 [2024-07-15 16:56:35.663028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.144 [2024-07-15 16:56:35.672379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.672397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.145 [2024-07-15 16:56:35.686646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.686664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.145 [2024-07-15 16:56:35.701009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.701026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.145 [2024-07-15 16:56:35.716864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.716884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.145 [2024-07-15 16:56:35.725860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.725878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.145 [2024-07-15 16:56:35.735246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.735265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.145 [2024-07-15 16:56:35.749807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.749825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:29.145 [2024-07-15 16:56:35.758771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.758789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.145 [2024-07-15 16:56:35.772923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.772940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.145 [2024-07-15 16:56:35.786705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.786723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.145 [2024-07-15 16:56:35.795917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.795934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.145 [2024-07-15 16:56:35.809989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.145 [2024-07-15 16:56:35.810008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.823831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.823849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.832636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.832653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.841956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.841974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.856292] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.856309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.870028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.870046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.878792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.878809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.888011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.888028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.897379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.897397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.911801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.911820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.926166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.926184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.937145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.937164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.401 [2024-07-15 16:56:35.946351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:29.401 [2024-07-15 16:56:35.946369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.402 [2024-07-15 16:56:35.955655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.402 [2024-07-15 16:56:35.955673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.402 [2024-07-15 16:56:35.965023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.402 [2024-07-15 16:56:35.965041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.402 [2024-07-15 16:56:35.979660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.402 [2024-07-15 16:56:35.979677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.402 [2024-07-15 16:56:35.993343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.402 [2024-07-15 16:56:35.993361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.402 [2024-07-15 16:56:36.002364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.402 [2024-07-15 16:56:36.002382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.402 [2024-07-15 16:56:36.016644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.402 [2024-07-15 16:56:36.016663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.402 [2024-07-15 16:56:36.024587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.402 [2024-07-15 16:56:36.024605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.402 [2024-07-15 16:56:36.038326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.402 
[2024-07-15 16:56:36.038344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.402 [2024-07-15 16:56:36.051861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.402 [2024-07-15 16:56:36.051879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.402 [2024-07-15 16:56:36.060506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.402 [2024-07-15 16:56:36.060523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.074799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.074817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.088380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.088398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.102389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.102407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.116179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.116197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.130219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.130242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.139049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.139067] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.153500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.153517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.167660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.167678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.181393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.181411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.190353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.190371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.199706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.199724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.214009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.214027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.228034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.228052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.237282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.237300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:29.658 [2024-07-15 16:56:36.245844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.245862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.255214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.255237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.270205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.270223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.285567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.285584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.299964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.299981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.306705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.306722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.658 [2024-07-15 16:56:36.320632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.658 [2024-07-15 16:56:36.320651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.914 [2024-07-15 16:56:36.334231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.914 [2024-07-15 16:56:36.334251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.914 [2024-07-15 16:56:36.348001] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.915 [2024-07-15 16:56:36.348020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.915 [2024-07-15 16:56:36.361728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.915 [2024-07-15 16:56:36.361749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.915 [2024-07-15 16:56:36.375664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.915 [2024-07-15 16:56:36.375685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.915 [2024-07-15 16:56:36.384496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.915 [2024-07-15 16:56:36.384516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.915 [2024-07-15 16:56:36.398726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.915 [2024-07-15 16:56:36.398746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.915 [2024-07-15 16:56:36.412615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.915 [2024-07-15 16:56:36.412634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.915 [2024-07-15 16:56:36.421709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.915 [2024-07-15 16:56:36.421732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.915 [2024-07-15 16:56:36.430691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.915 [2024-07-15 16:56:36.430710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.915 [2024-07-15 16:56:36.440364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:29.915 [2024-07-15 16:56:36.440382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.915
[... the same two-line error pair — subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats continuously with only the timestamps changing, from [2024-07-15 16:56:36.440382] through [2024-07-15 16:56:38.562] (elapsed markers 00:14:29.915 through 00:14:31.974) ...]
[2024-07-15 16:56:38.562728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.974 [2024-07-15 16:56:38.571638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.974 [2024-07-15 16:56:38.571656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.974 [2024-07-15 16:56:38.580728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.974 [2024-07-15 16:56:38.580746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.974 [2024-07-15 16:56:38.590106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.974 [2024-07-15 16:56:38.590123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.974 [2024-07-15 16:56:38.599482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.974 [2024-07-15 16:56:38.599499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.974 [2024-07-15 16:56:38.614302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.974 [2024-07-15 16:56:38.614320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.974 [2024-07-15 16:56:38.625131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.974 [2024-07-15 16:56:38.625150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.974 [2024-07-15 16:56:38.639588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.974 [2024-07-15 16:56:38.639606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.654182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.654199] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.665799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.665816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.680348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.680366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.691296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.691314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.700138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.700155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.709329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.709347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.718656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.718673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.733736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.733758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.744550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.744568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:32.232 [2024-07-15 16:56:38.753869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.753888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.763155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.763173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.772500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.772517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.786942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.786960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.801336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.801354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.817606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.817624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.831338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.831357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.845342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.845360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.856248] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.856266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.865527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.865546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.874803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.874821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.232 [2024-07-15 16:56:38.889432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.232 [2024-07-15 16:56:38.889450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:38.903858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:38.903878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:38.914450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:38.914468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:38.923938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:38.923956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:38.933032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:38.933051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:38.942058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:38.942076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:38.956561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:38.956583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:38.970063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:38.970081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:38.979295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:38.979314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:38.993944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:38.993962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:39.007667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:39.007685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:39.016724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:39.016742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:39.031158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:39.031178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:39.046133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 
[2024-07-15 16:56:39.046151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:39.061167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:39.061186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:39.070156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:39.070174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:39.078827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:39.078845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:39.093660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:39.093679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:39.108790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:39.108807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.489 [2024-07-15 16:56:39.123522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.489 [2024-07-15 16:56:39.123540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.490 [2024-07-15 16:56:39.138581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.490 [2024-07-15 16:56:39.138599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.490 [2024-07-15 16:56:39.152924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.490 [2024-07-15 16:56:39.152943] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.163869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.163888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.178277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.178297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.192057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.192078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.206744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.206765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.221988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.222008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.236715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.236734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.245707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.245725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.254590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.254607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:32.747 [2024-07-15 16:56:39.269525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.269543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.280482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.280500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.295446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.295465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.311410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.311429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.325535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.325553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.339809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.339828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.351025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.351044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.365177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.365196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.374207] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.374231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.383023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.383041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.392464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.392492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.747 [2024-07-15 16:56:39.406958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.747 [2024-07-15 16:56:39.406976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.004 [2024-07-15 16:56:39.421014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.004 [2024-07-15 16:56:39.421033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.004 [2024-07-15 16:56:39.435638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.004 [2024-07-15 16:56:39.435657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.004 [2024-07-15 16:56:39.443345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.004 [2024-07-15 16:56:39.443363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.004 [2024-07-15 16:56:39.457038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.004 [2024-07-15 16:56:39.457056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.004 [2024-07-15 16:56:39.466150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:33.004 [2024-07-15 16:56:39.466169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.004 [2024-07-15 16:56:39.475666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.004 [2024-07-15 16:56:39.475684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.004 [2024-07-15 16:56:39.484539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.484558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.493916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.493935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.503182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.503200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.511895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.511915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.526932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.526952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.538293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.538312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.552359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 
[2024-07-15 16:56:39.552379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.566523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.566541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.577188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.577206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.591510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.591528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.605085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.605103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.614062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.614079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.628506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.628524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.641819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.641837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.656346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.656365] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.005 [2024-07-15 16:56:39.667363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.005 [2024-07-15 16:56:39.667381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.262 [2024-07-15 16:56:39.681606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.262 [2024-07-15 16:56:39.681625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.262 [2024-07-15 16:56:39.690506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.262 [2024-07-15 16:56:39.690524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.262 [2024-07-15 16:56:39.699586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.262 [2024-07-15 16:56:39.699604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.262 [2024-07-15 16:56:39.714163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.262 [2024-07-15 16:56:39.714181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.262 [2024-07-15 16:56:39.723276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.262 [2024-07-15 16:56:39.723294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.737547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.737565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.751377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.751396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:33.263 [2024-07-15 16:56:39.765379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.765397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.779400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.779419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.793685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.793703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.804437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.804455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.818957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.818975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.829603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.829620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.843974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.843992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.857526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.857544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.866418] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.866436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.881377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.881395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.896724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.896742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.910781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.910798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.263 [2024-07-15 16:56:39.924992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.263 [2024-07-15 16:56:39.925010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:39.936487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:39.936505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:39.950430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:39.950448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:39.959600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:39.959618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:39.973734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:39.973752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:39.986742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:39.986761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:39.995678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:39.995696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.010304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.010322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.023318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.023339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.037941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.037960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.046871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.046891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.061389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.061408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.075118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 
[2024-07-15 16:56:40.075137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.090836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.090854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.107028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.107046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.120977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.120995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.134683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.134701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.148357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.148379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.162282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.162300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.176515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.176533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.520 [2024-07-15 16:56:40.184291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.520 [2024-07-15 16:56:40.184309] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.198047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.198066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.212040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.212058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.225970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.225989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.239871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.239900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.254015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.254032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.268036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.268054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.276598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.276616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 00:14:33.778 Latency(us) 00:14:33.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.778 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 
00:14:33.778 Nvme1n1 : 5.01 16672.48 130.25 0.00 0.00 7670.35 3319.54 18122.13 00:14:33.778 =================================================================================================================== 00:14:33.778 Total : 16672.48 130.25 0.00 0.00 7670.35 3319.54 18122.13 00:14:33.778 [2024-07-15 16:56:40.286946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.286963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.298966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.298980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.311008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.311024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.323039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.323055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.335066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.335080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.347099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.347119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.359127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.359142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 
[2024-07-15 16:56:40.371165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.371181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.383199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.383212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.395234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.395243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.407269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.407281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.427314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.427326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.778 [2024-07-15 16:56:40.435335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.778 [2024-07-15 16:56:40.435344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.036 [2024-07-15 16:56:40.447374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.036 [2024-07-15 16:56:40.447386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.036 [2024-07-15 16:56:40.459403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.036 [2024-07-15 16:56:40.459412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.036 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (53751) - No such process 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 53751 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.036 delay0 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.036 16:56:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:34.036 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.036 [2024-07-15 16:56:40.584820] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:42.142 Initializing NVMe Controllers 
00:14:42.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:42.142 Initialization complete. Launching workers. 00:14:42.143 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 7226 00:14:42.143 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7494, failed to submit 52 00:14:42.143 success 7339, unsuccess 155, failed 0 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:42.143 rmmod nvme_tcp 00:14:42.143 rmmod nvme_fabrics 00:14:42.143 rmmod nvme_keyring 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 51894 ']' 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 51894 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 51894 ']' 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 51894 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # 
uname 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 51894 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51894' 00:14:42.143 killing process with pid 51894 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 51894 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 51894 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.143 16:56:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.516 16:56:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:43.516 00:14:43.516 real 0m32.241s 00:14:43.516 user 0m43.614s 00:14:43.516 sys 0m11.105s 00:14:43.516 16:56:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.516 16:56:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.516 ************************************ 00:14:43.516 END TEST 
nvmf_zcopy 00:14:43.516 ************************************ 00:14:43.516 16:56:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:43.516 16:56:49 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:43.516 16:56:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:43.516 16:56:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.516 16:56:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:43.516 ************************************ 00:14:43.516 START TEST nvmf_nmic 00:14:43.516 ************************************ 00:14:43.516 16:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:43.516 * Looking for test storage... 00:14:43.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.516 16:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.516 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:43.516 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- 
# NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.517 
16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.517 16:56:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:48.767 16:56:54 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:48.767 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.767 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:48.767 Found 0000:86:00.1 (0x8086 - 0x159b) 
00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:48.768 Found net devices under 0000:86:00.0: cvl_0_0 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.768 16:56:54 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:48.768 Found net devices under 0000:86:00.1: cvl_0_1 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:48.768 16:56:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:48.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:14:48.768 00:14:48.768 --- 10.0.0.2 ping statistics --- 00:14:48.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.768 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:14:48.768 00:14:48.768 --- 10.0.0.1 ping statistics --- 00:14:48.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.768 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=59319 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 59319 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 59319 ']' 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:48.768 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:48.768 [2024-07-15 16:56:55.162359] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:14:48.768 [2024-07-15 16:56:55.162406] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.768 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.768 [2024-07-15 16:56:55.219236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.768 [2024-07-15 16:56:55.300024] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.768 [2024-07-15 16:56:55.300058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.768 [2024-07-15 16:56:55.300065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.768 [2024-07-15 16:56:55.300070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.768 [2024-07-15 16:56:55.300075] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:48.768 [2024-07-15 16:56:55.300124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.768 [2024-07-15 16:56:55.300259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.768 [2024-07-15 16:56:55.300333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.768 [2024-07-15 16:56:55.300335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.332 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.332 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:14:49.332 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:49.332 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:49.332 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.332 16:56:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.332 16:56:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.332 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.332 16:56:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.332 [2024-07-15 16:56:56.001062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.590 Malloc0 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.590 [2024-07-15 16:56:56.053031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:49.590 test case1: single bdev can't be used in multiple subsystems 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.590 [2024-07-15 16:56:56.076951] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:49.590 [2024-07-15 16:56:56.076969] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:49.590 [2024-07-15 16:56:56.076976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.590 request: 00:14:49.590 { 00:14:49.590 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:49.590 "namespace": { 00:14:49.590 "bdev_name": "Malloc0", 00:14:49.590 "no_auto_visible": false 00:14:49.590 }, 00:14:49.590 "method": "nvmf_subsystem_add_ns", 00:14:49.590 "req_id": 1 00:14:49.590 } 00:14:49.590 Got JSON-RPC error response 00:14:49.590 response: 00:14:49.590 { 00:14:49.590 "code": -32602, 00:14:49.590 "message": "Invalid parameters" 00:14:49.590 } 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:14:49.590 Adding namespace failed - expected result. 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:49.590 test case2: host connect to nvmf target in multiple paths 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.590 [2024-07-15 16:56:56.089077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.590 16:56:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:50.960 16:56:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:51.892 16:56:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:51.892 16:56:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:14:51.892 16:56:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.892 16:56:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:51.892 16:56:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:14:53.860 16:57:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:53.860 16:57:00 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:53.860 16:57:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.860 16:57:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:53.860 16:57:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.860 16:57:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:14:53.860 16:57:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:53.860 [global] 00:14:53.860 thread=1 00:14:53.860 invalidate=1 00:14:53.860 rw=write 00:14:53.860 time_based=1 00:14:53.860 runtime=1 00:14:53.860 ioengine=libaio 00:14:53.860 direct=1 00:14:53.860 bs=4096 00:14:53.860 iodepth=1 00:14:53.860 norandommap=0 00:14:53.860 numjobs=1 00:14:53.860 00:14:53.860 verify_dump=1 00:14:53.860 verify_backlog=512 00:14:53.860 verify_state_save=0 00:14:53.860 do_verify=1 00:14:53.860 verify=crc32c-intel 00:14:53.860 [job0] 00:14:53.860 filename=/dev/nvme0n1 00:14:53.860 Could not set queue depth (nvme0n1) 00:14:54.117 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:54.117 fio-3.35 00:14:54.117 Starting 1 thread 00:14:55.492 00:14:55.492 job0: (groupid=0, jobs=1): err= 0: pid=60447: Mon Jul 15 16:57:01 2024 00:14:55.492 read: IOPS=1495, BW=5980KiB/s (6124kB/s)(6016KiB/1006msec) 00:14:55.492 slat (nsec): min=7104, max=35481, avg=8077.42, stdev=1559.92 00:14:55.492 clat (usec): min=224, max=41874, avg=446.83, stdev=2779.67 00:14:55.492 lat (usec): min=232, max=41898, avg=454.90, stdev=2780.55 00:14:55.492 clat percentiles (usec): 00:14:55.492 | 1.00th=[ 235], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 245], 00:14:55.492 | 30.00th=[ 247], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:14:55.492 | 70.00th=[ 
255], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 281], 00:14:55.492 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[41157], 99.95th=[41681], 00:14:55.492 | 99.99th=[41681] 00:14:55.492 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:14:55.492 slat (usec): min=10, max=23015, avg=26.87, stdev=586.96 00:14:55.492 clat (usec): min=144, max=359, avg=175.32, stdev=21.26 00:14:55.492 lat (usec): min=156, max=23373, avg=202.19, stdev=591.99 00:14:55.492 clat percentiles (usec): 00:14:55.492 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:14:55.492 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 172], 00:14:55.492 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 208], 00:14:55.492 | 99.00th=[ 235], 99.50th=[ 243], 99.90th=[ 359], 99.95th=[ 359], 00:14:55.492 | 99.99th=[ 359] 00:14:55.492 bw ( KiB/s): min= 4048, max= 8223, per=100.00%, avg=6135.50, stdev=2952.17, samples=2 00:14:55.492 iops : min= 1012, max= 2055, avg=1533.50, stdev=737.51, samples=2 00:14:55.492 lat (usec) : 250=75.66%, 500=24.11% 00:14:55.492 lat (msec) : 50=0.23% 00:14:55.492 cpu : usr=2.79%, sys=4.58%, ctx=3044, majf=0, minf=2 00:14:55.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:55.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.492 issued rwts: total=1504,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:55.492 00:14:55.492 Run status group 0 (all jobs): 00:14:55.492 READ: bw=5980KiB/s (6124kB/s), 5980KiB/s-5980KiB/s (6124kB/s-6124kB/s), io=6016KiB (6160kB), run=1006-1006msec 00:14:55.492 WRITE: bw=6107KiB/s (6254kB/s), 6107KiB/s-6107KiB/s (6254kB/s-6254kB/s), io=6144KiB (6291kB), run=1006-1006msec 00:14:55.492 00:14:55.492 Disk stats (read/write): 00:14:55.492 nvme0n1: ios=1528/1536, merge=0/0, ticks=1521/251, 
in_queue=1772, util=98.30% 00:14:55.492 16:57:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:55.492 rmmod nvme_tcp 00:14:55.492 rmmod nvme_fabrics 00:14:55.492 rmmod nvme_keyring 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@489 -- # '[' -n 59319 ']' 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 59319 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 59319 ']' 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 59319 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.492 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59319 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59319' 00:14:55.751 killing process with pid 59319 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 59319 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 59319 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.751 16:57:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.283 16:57:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # 
ip -4 addr flush cvl_0_1 00:14:58.283 00:14:58.283 real 0m14.438s 00:14:58.283 user 0m34.949s 00:14:58.283 sys 0m4.553s 00:14:58.283 16:57:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.283 16:57:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:58.283 ************************************ 00:14:58.283 END TEST nvmf_nmic 00:14:58.283 ************************************ 00:14:58.283 16:57:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:58.283 16:57:04 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:58.283 16:57:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:58.283 16:57:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.283 16:57:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:58.283 ************************************ 00:14:58.283 START TEST nvmf_fio_target 00:14:58.283 ************************************ 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:58.283 * Looking for test storage... 
00:14:58.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.283 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:58.284 16:57:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:03.543 
16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.543 
16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:03.543 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:03.543 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:03.543 Found net devices under 0000:86:00.0: cvl_0_0 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:03.543 Found net devices under 0000:86:00.1: cvl_0_1 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.543 16:57:10 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:03.543 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:03.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:15:03.801 00:15:03.801 --- 10.0.0.2 ping statistics --- 00:15:03.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.801 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:03.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:03.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:15:03.801 00:15:03.801 --- 10.0.0.1 ping statistics --- 00:15:03.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.801 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=64634 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 64634 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # 
'[' -z 64634 ']' 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.801 16:57:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.801 [2024-07-15 16:57:10.385915] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:15:03.801 [2024-07-15 16:57:10.385957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.801 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.801 [2024-07-15 16:57:10.443317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.060 [2024-07-15 16:57:10.524425] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.060 [2024-07-15 16:57:10.524463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.060 [2024-07-15 16:57:10.524471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.060 [2024-07-15 16:57:10.524477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.060 [2024-07-15 16:57:10.524482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:04.060 [2024-07-15 16:57:10.524528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.060 [2024-07-15 16:57:10.524544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.060 [2024-07-15 16:57:10.524634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.060 [2024-07-15 16:57:10.524635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.628 16:57:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.628 16:57:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:04.628 16:57:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.628 16:57:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:04.628 16:57:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.628 16:57:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.628 16:57:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:04.887 [2024-07-15 16:57:11.391806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.887 16:57:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:05.145 16:57:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:05.145 16:57:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:05.404 16:57:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:05.404 16:57:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:15:05.404 16:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:05.404 16:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:05.663 16:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:05.663 16:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:05.921 16:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:05.921 16:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:06.178 16:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:06.178 16:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:06.178 16:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:06.434 16:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:06.434 16:57:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:06.691 16:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:06.691 16:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:06.691 16:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:06.948 16:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:06.948 16:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:07.205 16:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.205 [2024-07-15 16:57:13.857720] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.462 16:57:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:07.462 16:57:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:07.718 16:57:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:09.093 16:57:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:09.093 16:57:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:09.093 16:57:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.093 16:57:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:09.093 16:57:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:09.093 16:57:15 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:11.022 16:57:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:11.022 16:57:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:11.022 16:57:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.022 16:57:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:11.022 16:57:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.022 16:57:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:11.022 16:57:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:11.022 [global] 00:15:11.022 thread=1 00:15:11.022 invalidate=1 00:15:11.022 rw=write 00:15:11.022 time_based=1 00:15:11.022 runtime=1 00:15:11.022 ioengine=libaio 00:15:11.022 direct=1 00:15:11.022 bs=4096 00:15:11.022 iodepth=1 00:15:11.022 norandommap=0 00:15:11.022 numjobs=1 00:15:11.022 00:15:11.022 verify_dump=1 00:15:11.022 verify_backlog=512 00:15:11.022 verify_state_save=0 00:15:11.022 do_verify=1 00:15:11.022 verify=crc32c-intel 00:15:11.022 [job0] 00:15:11.022 filename=/dev/nvme0n1 00:15:11.022 [job1] 00:15:11.022 filename=/dev/nvme0n2 00:15:11.022 [job2] 00:15:11.022 filename=/dev/nvme0n3 00:15:11.022 [job3] 00:15:11.022 filename=/dev/nvme0n4 00:15:11.022 Could not set queue depth (nvme0n1) 00:15:11.022 Could not set queue depth (nvme0n2) 00:15:11.022 Could not set queue depth (nvme0n3) 00:15:11.022 Could not set queue depth (nvme0n4) 00:15:11.280 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.280 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:15:11.280 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.280 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.280 fio-3.35 00:15:11.280 Starting 4 threads 00:15:12.665 00:15:12.665 job0: (groupid=0, jobs=1): err= 0: pid=66019: Mon Jul 15 16:57:19 2024 00:15:12.665 read: IOPS=1724, BW=6897KiB/s (7063kB/s)(6904KiB/1001msec) 00:15:12.665 slat (nsec): min=6205, max=32105, avg=7291.73, stdev=1271.15 00:15:12.665 clat (usec): min=229, max=41239, avg=337.97, stdev=987.46 00:15:12.665 lat (usec): min=236, max=41247, avg=345.26, stdev=987.47 00:15:12.665 clat percentiles (usec): 00:15:12.665 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:15:12.665 | 30.00th=[ 273], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 310], 00:15:12.665 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 408], 95.00th=[ 506], 00:15:12.665 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 594], 99.95th=[41157], 00:15:12.665 | 99.99th=[41157] 00:15:12.665 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:12.665 slat (nsec): min=8701, max=75043, avg=11336.25, stdev=6912.81 00:15:12.665 clat (usec): min=122, max=424, avg=181.44, stdev=21.28 00:15:12.665 lat (usec): min=147, max=455, avg=192.78, stdev=23.65 00:15:12.665 clat percentiles (usec): 00:15:12.665 | 1.00th=[ 145], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:15:12.665 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:15:12.665 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 215], 00:15:12.665 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 363], 99.95th=[ 375], 00:15:12.665 | 99.99th=[ 424] 00:15:12.665 bw ( KiB/s): min= 8192, max= 8192, per=37.24%, avg=8192.00, stdev= 0.00, samples=1 00:15:12.665 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:12.665 lat (usec) : 250=55.83%, 500=41.26%, 750=2.89% 00:15:12.665 lat 
(msec) : 50=0.03% 00:15:12.665 cpu : usr=1.80%, sys=3.60%, ctx=3775, majf=0, minf=1 00:15:12.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.665 issued rwts: total=1726,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.665 job1: (groupid=0, jobs=1): err= 0: pid=66020: Mon Jul 15 16:57:19 2024 00:15:12.665 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:15:12.665 slat (nsec): min=9242, max=23403, avg=22056.64, stdev=2874.31 00:15:12.665 clat (usec): min=40867, max=41971, avg=41108.81, stdev=352.64 00:15:12.665 lat (usec): min=40890, max=41993, avg=41130.87, stdev=352.56 00:15:12.665 clat percentiles (usec): 00:15:12.665 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:12.665 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:12.665 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:15:12.665 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:12.665 | 99.99th=[42206] 00:15:12.665 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:15:12.665 slat (nsec): min=8703, max=32290, avg=10538.69, stdev=1518.84 00:15:12.665 clat (usec): min=151, max=252, avg=183.32, stdev=12.88 00:15:12.665 lat (usec): min=161, max=262, avg=193.86, stdev=13.07 00:15:12.665 clat percentiles (usec): 00:15:12.665 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:15:12.665 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:15:12.665 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 206], 00:15:12.665 | 99.00th=[ 223], 99.50th=[ 225], 99.90th=[ 253], 99.95th=[ 253], 00:15:12.665 | 99.99th=[ 253] 00:15:12.665 bw ( KiB/s): min= 4096, max= 4096, 
per=18.62%, avg=4096.00, stdev= 0.00, samples=1 00:15:12.665 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:12.665 lat (usec) : 250=95.69%, 500=0.19% 00:15:12.665 lat (msec) : 50=4.12% 00:15:12.666 cpu : usr=0.30%, sys=0.50%, ctx=535, majf=0, minf=2 00:15:12.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.666 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.666 job2: (groupid=0, jobs=1): err= 0: pid=66021: Mon Jul 15 16:57:19 2024 00:15:12.666 read: IOPS=778, BW=3113KiB/s (3188kB/s)(3188KiB/1024msec) 00:15:12.666 slat (nsec): min=6220, max=25857, avg=7430.72, stdev=2317.60 00:15:12.666 clat (usec): min=271, max=41341, avg=1020.14, stdev=5145.70 00:15:12.666 lat (usec): min=278, max=41348, avg=1027.57, stdev=5146.81 00:15:12.666 clat percentiles (usec): 00:15:12.666 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:15:12.666 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 330], 00:15:12.666 | 70.00th=[ 347], 80.00th=[ 453], 90.00th=[ 510], 95.00th=[ 519], 00:15:12.666 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:12.666 | 99.99th=[41157] 00:15:12.666 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:15:12.666 slat (nsec): min=9146, max=37748, avg=10229.08, stdev=1226.77 00:15:12.666 clat (usec): min=138, max=338, avg=184.82, stdev=18.96 00:15:12.666 lat (usec): min=148, max=376, avg=195.05, stdev=19.24 00:15:12.666 clat percentiles (usec): 00:15:12.666 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:15:12.666 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:15:12.666 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 210], 
95.00th=[ 217], 00:15:12.666 | 99.00th=[ 237], 99.50th=[ 243], 99.90th=[ 273], 99.95th=[ 338], 00:15:12.666 | 99.99th=[ 338] 00:15:12.666 bw ( KiB/s): min= 8192, max= 8192, per=37.24%, avg=8192.00, stdev= 0.00, samples=1 00:15:12.666 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:12.666 lat (usec) : 250=56.01%, 500=37.56%, 750=5.71% 00:15:12.666 lat (msec) : 50=0.71% 00:15:12.666 cpu : usr=1.17%, sys=1.37%, ctx=1823, majf=0, minf=1 00:15:12.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.666 issued rwts: total=797,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.666 job3: (groupid=0, jobs=1): err= 0: pid=66022: Mon Jul 15 16:57:19 2024 00:15:12.666 read: IOPS=1632, BW=6529KiB/s (6686kB/s)(6536KiB/1001msec) 00:15:12.666 slat (nsec): min=7151, max=57691, avg=8566.03, stdev=2206.29 00:15:12.666 clat (usec): min=248, max=3854, avg=336.70, stdev=93.39 00:15:12.666 lat (usec): min=256, max=3862, avg=345.27, stdev=93.45 00:15:12.666 clat percentiles (usec): 00:15:12.666 | 1.00th=[ 265], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 314], 00:15:12.666 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:15:12.666 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 420], 00:15:12.666 | 99.00th=[ 461], 99.50th=[ 482], 99.90th=[ 603], 99.95th=[ 3851], 00:15:12.666 | 99.99th=[ 3851] 00:15:12.666 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:12.666 slat (nsec): min=9881, max=72982, avg=11710.05, stdev=1982.53 00:15:12.666 clat (usec): min=145, max=317, avg=195.64, stdev=15.70 00:15:12.666 lat (usec): min=158, max=329, avg=207.35, stdev=15.78 00:15:12.666 clat percentiles (usec): 00:15:12.666 | 1.00th=[ 163], 5.00th=[ 174], 
10.00th=[ 178], 20.00th=[ 184], 00:15:12.666 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:15:12.666 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 221], 00:15:12.666 | 99.00th=[ 237], 99.50th=[ 255], 99.90th=[ 297], 99.95th=[ 306], 00:15:12.666 | 99.99th=[ 318] 00:15:12.666 bw ( KiB/s): min= 8192, max= 8192, per=37.24%, avg=8192.00, stdev= 0.00, samples=1 00:15:12.666 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:12.666 lat (usec) : 250=55.27%, 500=44.57%, 750=0.14% 00:15:12.666 lat (msec) : 4=0.03% 00:15:12.666 cpu : usr=2.50%, sys=6.50%, ctx=3682, majf=0, minf=1 00:15:12.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.666 issued rwts: total=1634,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.666 00:15:12.666 Run status group 0 (all jobs): 00:15:12.666 READ: bw=15.9MiB/s (16.7MB/s), 87.5KiB/s-6897KiB/s (89.6kB/s-7063kB/s), io=16.3MiB (17.1MB), run=1001-1024msec 00:15:12.666 WRITE: bw=21.5MiB/s (22.5MB/s), 2036KiB/s-8184KiB/s (2085kB/s-8380kB/s), io=22.0MiB (23.1MB), run=1001-1024msec 00:15:12.666 00:15:12.666 Disk stats (read/write): 00:15:12.666 nvme0n1: ios=1586/1553, merge=0/0, ticks=542/269, in_queue=811, util=86.87% 00:15:12.666 nvme0n2: ios=44/512, merge=0/0, ticks=1731/93, in_queue=1824, util=98.37% 00:15:12.666 nvme0n3: ios=819/1024, merge=0/0, ticks=1600/179, in_queue=1779, util=98.44% 00:15:12.666 nvme0n4: ios=1534/1536, merge=0/0, ticks=836/280, in_queue=1116, util=91.07% 00:15:12.666 16:57:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:12.666 [global] 00:15:12.666 thread=1 00:15:12.666 
invalidate=1 00:15:12.666 rw=randwrite 00:15:12.666 time_based=1 00:15:12.666 runtime=1 00:15:12.666 ioengine=libaio 00:15:12.666 direct=1 00:15:12.666 bs=4096 00:15:12.666 iodepth=1 00:15:12.666 norandommap=0 00:15:12.666 numjobs=1 00:15:12.666 00:15:12.666 verify_dump=1 00:15:12.666 verify_backlog=512 00:15:12.666 verify_state_save=0 00:15:12.666 do_verify=1 00:15:12.666 verify=crc32c-intel 00:15:12.666 [job0] 00:15:12.666 filename=/dev/nvme0n1 00:15:12.666 [job1] 00:15:12.666 filename=/dev/nvme0n2 00:15:12.666 [job2] 00:15:12.666 filename=/dev/nvme0n3 00:15:12.666 [job3] 00:15:12.666 filename=/dev/nvme0n4 00:15:12.666 Could not set queue depth (nvme0n1) 00:15:12.666 Could not set queue depth (nvme0n2) 00:15:12.666 Could not set queue depth (nvme0n3) 00:15:12.666 Could not set queue depth (nvme0n4) 00:15:12.924 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:12.924 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:12.924 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:12.924 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:12.924 fio-3.35 00:15:12.924 Starting 4 threads 00:15:14.295 00:15:14.295 job0: (groupid=0, jobs=1): err= 0: pid=66395: Mon Jul 15 16:57:20 2024 00:15:14.295 read: IOPS=1027, BW=4109KiB/s (4208kB/s)(4204KiB/1023msec) 00:15:14.295 slat (nsec): min=6355, max=23267, avg=7474.79, stdev=1649.69 00:15:14.295 clat (usec): min=264, max=41377, avg=632.19, stdev=3487.12 00:15:14.295 lat (usec): min=271, max=41397, avg=639.67, stdev=3488.45 00:15:14.295 clat percentiles (usec): 00:15:14.295 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:15:14.295 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 318], 00:15:14.295 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 392], 
95.00th=[ 416], 00:15:14.295 | 99.00th=[ 750], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:14.295 | 99.99th=[41157] 00:15:14.295 write: IOPS=1501, BW=6006KiB/s (6150kB/s)(6144KiB/1023msec); 0 zone resets 00:15:14.295 slat (usec): min=9, max=39229, avg=36.00, stdev=1000.69 00:15:14.295 clat (usec): min=143, max=419, avg=188.24, stdev=20.00 00:15:14.295 lat (usec): min=154, max=39599, avg=224.24, stdev=1005.53 00:15:14.295 clat percentiles (usec): 00:15:14.295 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 00:15:14.295 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:15:14.295 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 221], 00:15:14.295 | 99.00th=[ 233], 99.50th=[ 249], 99.90th=[ 371], 99.95th=[ 420], 00:15:14.295 | 99.99th=[ 420] 00:15:14.295 bw ( KiB/s): min= 4096, max= 8192, per=38.36%, avg=6144.00, stdev=2896.31, samples=2 00:15:14.295 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:15:14.295 lat (usec) : 250=59.14%, 500=40.12%, 750=0.35%, 1000=0.04% 00:15:14.295 lat (msec) : 4=0.04%, 50=0.31% 00:15:14.295 cpu : usr=1.08%, sys=2.54%, ctx=2590, majf=0, minf=2 00:15:14.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.295 issued rwts: total=1051,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.295 job1: (groupid=0, jobs=1): err= 0: pid=66396: Mon Jul 15 16:57:20 2024 00:15:14.295 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:15:14.295 slat (nsec): min=7276, max=25274, avg=17320.41, stdev=4644.88 00:15:14.295 clat (usec): min=40897, max=42951, avg=41278.40, stdev=543.74 00:15:14.295 lat (usec): min=40914, max=42976, avg=41295.72, stdev=543.79 00:15:14.295 clat percentiles (usec): 
00:15:14.295 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:14.295 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:14.295 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:15:14.295 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:15:14.295 | 99.99th=[42730] 00:15:14.295 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:15:14.295 slat (nsec): min=10516, max=72600, avg=12666.72, stdev=3614.78 00:15:14.295 clat (usec): min=154, max=426, avg=197.82, stdev=18.94 00:15:14.295 lat (usec): min=176, max=463, avg=210.48, stdev=19.62 00:15:14.295 clat percentiles (usec): 00:15:14.295 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:15:14.295 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:15:14.295 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 227], 00:15:14.295 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 429], 99.95th=[ 429], 00:15:14.295 | 99.99th=[ 429] 00:15:14.295 bw ( KiB/s): min= 4096, max= 4096, per=25.58%, avg=4096.00, stdev= 0.00, samples=1 00:15:14.295 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:14.295 lat (usec) : 250=95.51%, 500=0.37% 00:15:14.295 lat (msec) : 50=4.12% 00:15:14.295 cpu : usr=0.20%, sys=1.18%, ctx=535, majf=0, minf=1 00:15:14.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.295 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.295 job2: (groupid=0, jobs=1): err= 0: pid=66397: Mon Jul 15 16:57:20 2024 00:15:14.295 read: IOPS=281, BW=1125KiB/s (1152kB/s)(1136KiB/1010msec) 00:15:14.295 slat (nsec): min=7518, max=24932, avg=9180.75, stdev=3253.45 
00:15:14.295 clat (usec): min=295, max=42308, avg=3103.44, stdev=10227.17 00:15:14.295 lat (usec): min=303, max=42329, avg=3112.62, stdev=10229.58 00:15:14.295 clat percentiles (usec): 00:15:14.295 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 334], 00:15:14.295 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 383], 00:15:14.295 | 70.00th=[ 400], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[41157], 00:15:14.295 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:14.295 | 99.99th=[42206] 00:15:14.295 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:15:14.295 slat (nsec): min=10193, max=39412, avg=11418.98, stdev=1641.33 00:15:14.295 clat (usec): min=162, max=418, avg=230.03, stdev=29.64 00:15:14.295 lat (usec): min=173, max=429, avg=241.45, stdev=29.69 00:15:14.295 clat percentiles (usec): 00:15:14.295 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 204], 00:15:14.295 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:15:14.295 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:15:14.295 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 420], 99.95th=[ 420], 00:15:14.295 | 99.99th=[ 420] 00:15:14.295 bw ( KiB/s): min= 4096, max= 4096, per=25.58%, avg=4096.00, stdev= 0.00, samples=1 00:15:14.295 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:14.295 lat (usec) : 250=47.49%, 500=49.75%, 750=0.38% 00:15:14.295 lat (msec) : 50=2.39% 00:15:14.295 cpu : usr=0.79%, sys=0.50%, ctx=796, majf=0, minf=1 00:15:14.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.295 issued rwts: total=284,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.295 job3: (groupid=0, jobs=1): err= 
0: pid=66398: Mon Jul 15 16:57:20 2024 00:15:14.296 read: IOPS=1011, BW=4047KiB/s (4144kB/s)(4128KiB/1020msec) 00:15:14.296 slat (nsec): min=2413, max=41652, avg=7715.21, stdev=2884.86 00:15:14.296 clat (usec): min=233, max=41004, avg=643.67, stdev=3562.34 00:15:14.296 lat (usec): min=236, max=41028, avg=651.39, stdev=3563.53 00:15:14.296 clat percentiles (usec): 00:15:14.296 | 1.00th=[ 247], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:15:14.296 | 30.00th=[ 293], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:15:14.296 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 400], 95.00th=[ 441], 00:15:14.296 | 99.00th=[ 619], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:14.296 | 99.99th=[41157] 00:15:14.296 write: IOPS=1505, BW=6024KiB/s (6168kB/s)(6144KiB/1020msec); 0 zone resets 00:15:14.296 slat (nsec): min=10688, max=43484, avg=12126.99, stdev=1929.57 00:15:14.296 clat (usec): min=155, max=380, avg=209.07, stdev=32.11 00:15:14.296 lat (usec): min=167, max=418, avg=221.19, stdev=32.31 00:15:14.296 clat percentiles (usec): 00:15:14.296 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:15:14.296 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 208], 00:15:14.296 | 70.00th=[ 237], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 260], 00:15:14.296 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 338], 99.95th=[ 383], 00:15:14.296 | 99.99th=[ 383] 00:15:14.296 bw ( KiB/s): min= 4096, max= 8192, per=38.36%, avg=6144.00, stdev=2896.31, samples=2 00:15:14.296 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:15:14.296 lat (usec) : 250=56.50%, 500=42.80%, 750=0.39% 00:15:14.296 lat (msec) : 50=0.31% 00:15:14.296 cpu : usr=2.36%, sys=3.73%, ctx=2569, majf=0, minf=1 00:15:14.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:15:14.296 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.296 00:15:14.296 Run status group 0 (all jobs): 00:15:14.296 READ: bw=9341KiB/s (9565kB/s), 86.4KiB/s-4109KiB/s (88.5kB/s-4208kB/s), io=9556KiB (9785kB), run=1010-1023msec 00:15:14.296 WRITE: bw=15.6MiB/s (16.4MB/s), 2012KiB/s-6024KiB/s (2060kB/s-6168kB/s), io=16.0MiB (16.8MB), run=1010-1023msec 00:15:14.296 00:15:14.296 Disk stats (read/write): 00:15:14.296 nvme0n1: ios=1071/1536, merge=0/0, ticks=1408/277, in_queue=1685, util=97.70% 00:15:14.296 nvme0n2: ios=21/512, merge=0/0, ticks=873/94, in_queue=967, util=86.76% 00:15:14.296 nvme0n3: ios=279/512, merge=0/0, ticks=675/122, in_queue=797, util=87.65% 00:15:14.296 nvme0n4: ios=1082/1536, merge=0/0, ticks=1311/310, in_queue=1621, util=98.02% 00:15:14.296 16:57:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:14.296 [global] 00:15:14.296 thread=1 00:15:14.296 invalidate=1 00:15:14.296 rw=write 00:15:14.296 time_based=1 00:15:14.296 runtime=1 00:15:14.296 ioengine=libaio 00:15:14.296 direct=1 00:15:14.296 bs=4096 00:15:14.296 iodepth=128 00:15:14.296 norandommap=0 00:15:14.296 numjobs=1 00:15:14.296 00:15:14.296 verify_dump=1 00:15:14.296 verify_backlog=512 00:15:14.296 verify_state_save=0 00:15:14.296 do_verify=1 00:15:14.296 verify=crc32c-intel 00:15:14.296 [job0] 00:15:14.296 filename=/dev/nvme0n1 00:15:14.296 [job1] 00:15:14.296 filename=/dev/nvme0n2 00:15:14.296 [job2] 00:15:14.296 filename=/dev/nvme0n3 00:15:14.296 [job3] 00:15:14.296 filename=/dev/nvme0n4 00:15:14.296 Could not set queue depth (nvme0n1) 00:15:14.296 Could not set queue depth (nvme0n2) 00:15:14.296 Could not set queue depth (nvme0n3) 00:15:14.296 Could not set queue depth (nvme0n4) 00:15:14.552 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:15:14.552 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:14.552 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:14.552 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:14.552 fio-3.35 00:15:14.552 Starting 4 threads 00:15:15.941 00:15:15.941 job0: (groupid=0, jobs=1): err= 0: pid=66764: Mon Jul 15 16:57:22 2024 00:15:15.941 read: IOPS=5471, BW=21.4MiB/s (22.4MB/s)(21.4MiB/1002msec) 00:15:15.941 slat (nsec): min=1335, max=10653k, avg=84742.99, stdev=459021.22 00:15:15.941 clat (usec): min=648, max=31310, avg=10558.73, stdev=2007.28 00:15:15.941 lat (usec): min=2209, max=31317, avg=10643.47, stdev=2028.39 00:15:15.941 clat percentiles (usec): 00:15:15.941 | 1.00th=[ 5538], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9634], 00:15:15.941 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10683], 00:15:15.941 | 70.00th=[10945], 80.00th=[11469], 90.00th=[12256], 95.00th=[12911], 00:15:15.941 | 99.00th=[19792], 99.50th=[20841], 99.90th=[30540], 99.95th=[30540], 00:15:15.941 | 99.99th=[31327] 00:15:15.941 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:15:15.941 slat (nsec): min=2000, max=23102k, avg=90952.02, stdev=600144.11 00:15:15.941 clat (usec): min=5465, max=33910, avg=12098.92, stdev=5190.10 00:15:15.941 lat (usec): min=5476, max=33917, avg=12189.88, stdev=5212.19 00:15:15.941 clat percentiles (usec): 00:15:15.941 | 1.00th=[ 8225], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10028], 00:15:15.941 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:15:15.941 | 70.00th=[10683], 80.00th=[10945], 90.00th=[17433], 95.00th=[26346], 00:15:15.941 | 99.00th=[31851], 99.50th=[32375], 99.90th=[33162], 99.95th=[33162], 00:15:15.941 | 99.99th=[33817] 00:15:15.941 bw ( KiB/s): min=20480, max=24576, 
per=30.41%, avg=22528.00, stdev=2896.31, samples=2 00:15:15.941 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:15:15.941 lat (usec) : 750=0.01% 00:15:15.941 lat (msec) : 4=0.19%, 10=23.08%, 20=71.93%, 50=4.80% 00:15:15.941 cpu : usr=3.30%, sys=5.09%, ctx=610, majf=0, minf=1 00:15:15.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:15.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.941 issued rwts: total=5482,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.941 job1: (groupid=0, jobs=1): err= 0: pid=66765: Mon Jul 15 16:57:22 2024 00:15:15.941 read: IOPS=5627, BW=22.0MiB/s (23.0MB/s)(22.2MiB/1009msec) 00:15:15.941 slat (nsec): min=1291, max=11634k, avg=90350.93, stdev=660712.31 00:15:15.941 clat (usec): min=3599, max=22305, avg=11158.68, stdev=2363.91 00:15:15.941 lat (usec): min=3607, max=25131, avg=11249.04, stdev=2421.11 00:15:15.941 clat percentiles (usec): 00:15:15.941 | 1.00th=[ 4424], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[ 9896], 00:15:15.941 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:15:15.941 | 70.00th=[11600], 80.00th=[12780], 90.00th=[14615], 95.00th=[16319], 00:15:15.941 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18744], 99.95th=[19006], 00:15:15.941 | 99.99th=[22414] 00:15:15.941 write: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec); 0 zone resets 00:15:15.941 slat (usec): min=2, max=15435, avg=72.70, stdev=497.52 00:15:15.941 clat (usec): min=560, max=34487, avg=10354.49, stdev=3752.23 00:15:15.941 lat (usec): min=589, max=34500, avg=10427.19, stdev=3789.26 00:15:15.941 clat percentiles (usec): 00:15:15.941 | 1.00th=[ 2802], 5.00th=[ 4752], 10.00th=[ 6194], 20.00th=[ 8586], 00:15:15.941 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[10421], 
00:15:15.941 | 70.00th=[10552], 80.00th=[10945], 90.00th=[13435], 95.00th=[16581], 00:15:15.941 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[34341], 00:15:15.941 | 99.99th=[34341] 00:15:15.941 bw ( KiB/s): min=23408, max=25096, per=32.74%, avg=24252.00, stdev=1193.60, samples=2 00:15:15.941 iops : min= 5852, max= 6274, avg=6063.00, stdev=298.40, samples=2 00:15:15.941 lat (usec) : 750=0.03% 00:15:15.941 lat (msec) : 2=0.08%, 4=1.76%, 10=28.20%, 20=68.84%, 50=1.08% 00:15:15.941 cpu : usr=4.27%, sys=6.15%, ctx=673, majf=0, minf=1 00:15:15.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:15.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.941 issued rwts: total=5678,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.941 job2: (groupid=0, jobs=1): err= 0: pid=66766: Mon Jul 15 16:57:22 2024 00:15:15.941 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:15:15.941 slat (nsec): min=1353, max=40696k, avg=263865.22, stdev=2027873.43 00:15:15.941 clat (msec): min=9, max=115, avg=32.72, stdev=28.92 00:15:15.941 lat (msec): min=9, max=115, avg=32.99, stdev=29.07 00:15:15.941 clat percentiles (msec): 00:15:15.941 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 13], 00:15:15.941 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 22], 60.00th=[ 24], 00:15:15.941 | 70.00th=[ 35], 80.00th=[ 47], 90.00th=[ 78], 95.00th=[ 105], 00:15:15.941 | 99.00th=[ 115], 99.50th=[ 115], 99.90th=[ 115], 99.95th=[ 115], 00:15:15.941 | 99.99th=[ 115] 00:15:15.941 write: IOPS=2837, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1004msec); 0 zone resets 00:15:15.941 slat (usec): min=2, max=5937, avg=109.13, stdev=536.40 00:15:15.941 clat (usec): min=1295, max=61813, avg=15155.69, stdev=7139.36 00:15:15.941 lat (usec): min=4795, max=61823, avg=15264.82, stdev=7122.11 
00:15:15.941 clat percentiles (usec): 00:15:15.941 | 1.00th=[ 5145], 5.00th=[10028], 10.00th=[11076], 20.00th=[11731], 00:15:15.941 | 30.00th=[12125], 40.00th=[12387], 50.00th=[13173], 60.00th=[15008], 00:15:15.941 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17433], 95.00th=[22414], 00:15:15.941 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:15:15.941 | 99.99th=[61604] 00:15:15.941 bw ( KiB/s): min= 5384, max=16384, per=14.69%, avg=10884.00, stdev=7778.17, samples=2 00:15:15.941 iops : min= 1346, max= 4096, avg=2721.00, stdev=1944.54, samples=2 00:15:15.941 lat (msec) : 2=0.02%, 10=3.59%, 20=68.29%, 50=18.14%, 100=6.53% 00:15:15.941 lat (msec) : 250=3.44% 00:15:15.941 cpu : usr=2.49%, sys=2.99%, ctx=268, majf=0, minf=1 00:15:15.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:15.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.941 issued rwts: total=2560,2849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.941 job3: (groupid=0, jobs=1): err= 0: pid=66767: Mon Jul 15 16:57:22 2024 00:15:15.941 read: IOPS=3614, BW=14.1MiB/s (14.8MB/s)(14.3MiB/1011msec) 00:15:15.941 slat (nsec): min=1159, max=16550k, avg=116021.10, stdev=876232.93 00:15:15.941 clat (usec): min=4732, max=39868, avg=14787.11, stdev=5519.19 00:15:15.941 lat (usec): min=4738, max=39873, avg=14903.13, stdev=5574.74 00:15:15.941 clat percentiles (usec): 00:15:15.941 | 1.00th=[ 5473], 5.00th=[ 8717], 10.00th=[10814], 20.00th=[11338], 00:15:15.941 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13566], 60.00th=[13829], 00:15:15.941 | 70.00th=[14615], 80.00th=[16319], 90.00th=[22414], 95.00th=[27919], 00:15:15.941 | 99.00th=[35390], 99.50th=[35914], 99.90th=[40109], 99.95th=[40109], 00:15:15.941 | 99.99th=[40109] 00:15:15.941 write: IOPS=4051, BW=15.8MiB/s 
(16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:15:15.941 slat (usec): min=2, max=17599, avg=124.36, stdev=844.72 00:15:15.941 clat (usec): min=2034, max=45631, avg=18119.88, stdev=9756.08 00:15:15.941 lat (usec): min=2036, max=45636, avg=18244.24, stdev=9832.22 00:15:15.941 clat percentiles (usec): 00:15:15.941 | 1.00th=[ 4293], 5.00th=[ 6980], 10.00th=[ 8848], 20.00th=[10290], 00:15:15.941 | 30.00th=[11076], 40.00th=[11994], 50.00th=[14615], 60.00th=[19268], 00:15:15.941 | 70.00th=[22938], 80.00th=[27132], 90.00th=[33424], 95.00th=[36439], 00:15:15.941 | 99.00th=[42730], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 00:15:15.941 | 99.99th=[45876] 00:15:15.941 bw ( KiB/s): min=15936, max=16368, per=21.81%, avg=16152.00, stdev=305.47, samples=2 00:15:15.941 iops : min= 3984, max= 4092, avg=4038.00, stdev=76.37, samples=2 00:15:15.941 lat (msec) : 4=0.50%, 10=11.34%, 20=63.34%, 50=24.81% 00:15:15.941 cpu : usr=3.17%, sys=4.26%, ctx=328, majf=0, minf=1 00:15:15.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:15.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.941 issued rwts: total=3654,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.941 00:15:15.941 Run status group 0 (all jobs): 00:15:15.941 READ: bw=67.1MiB/s (70.4MB/s), 9.96MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=67.9MiB (71.2MB), run=1002-1011msec 00:15:15.941 WRITE: bw=72.3MiB/s (75.8MB/s), 11.1MiB/s-23.8MiB/s (11.6MB/s-24.9MB/s), io=73.1MiB (76.7MB), run=1002-1011msec 00:15:15.941 00:15:15.941 Disk stats (read/write): 00:15:15.941 nvme0n1: ios=4658/4713, merge=0/0, ticks=16343/18548, in_queue=34891, util=87.27% 00:15:15.941 nvme0n2: ios=4714/5120, merge=0/0, ticks=52497/52203, in_queue=104700, util=100.00% 00:15:15.941 nvme0n3: ios=2176/2560, merge=0/0, ticks=18160/8296, 
in_queue=26456, util=88.98% 00:15:15.941 nvme0n4: ios=3131/3508, merge=0/0, ticks=43160/54811, in_queue=97971, util=100.00% 00:15:15.941 16:57:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:15.941 [global] 00:15:15.941 thread=1 00:15:15.941 invalidate=1 00:15:15.941 rw=randwrite 00:15:15.941 time_based=1 00:15:15.941 runtime=1 00:15:15.941 ioengine=libaio 00:15:15.941 direct=1 00:15:15.941 bs=4096 00:15:15.941 iodepth=128 00:15:15.941 norandommap=0 00:15:15.941 numjobs=1 00:15:15.941 00:15:15.941 verify_dump=1 00:15:15.941 verify_backlog=512 00:15:15.941 verify_state_save=0 00:15:15.941 do_verify=1 00:15:15.941 verify=crc32c-intel 00:15:15.941 [job0] 00:15:15.941 filename=/dev/nvme0n1 00:15:15.941 [job1] 00:15:15.941 filename=/dev/nvme0n2 00:15:15.941 [job2] 00:15:15.941 filename=/dev/nvme0n3 00:15:15.941 [job3] 00:15:15.941 filename=/dev/nvme0n4 00:15:15.941 Could not set queue depth (nvme0n1) 00:15:15.941 Could not set queue depth (nvme0n2) 00:15:15.941 Could not set queue depth (nvme0n3) 00:15:15.941 Could not set queue depth (nvme0n4) 00:15:15.941 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:15.941 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:15.941 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:15.941 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:15.941 fio-3.35 00:15:15.941 Starting 4 threads 00:15:17.312 00:15:17.312 job0: (groupid=0, jobs=1): err= 0: pid=67146: Mon Jul 15 16:57:23 2024 00:15:17.312 read: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(16.4MiB/1048msec) 00:15:17.312 slat (nsec): min=975, max=44326k, avg=121463.12, stdev=961852.06 00:15:17.312 clat (usec): 
min=8067, max=83800, avg=15742.14, stdev=10876.63 00:15:17.312 lat (usec): min=8294, max=83813, avg=15863.61, stdev=10922.04 00:15:17.312 clat percentiles (usec): 00:15:17.312 | 1.00th=[ 8848], 5.00th=[10028], 10.00th=[10421], 20.00th=[11076], 00:15:17.312 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:15:17.312 | 70.00th=[13042], 80.00th=[13698], 90.00th=[25822], 95.00th=[47449], 00:15:17.312 | 99.00th=[61604], 99.50th=[77071], 99.90th=[83362], 99.95th=[83362], 00:15:17.312 | 99.99th=[83362] 00:15:17.312 write: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1048msec); 0 zone resets 00:15:17.312 slat (nsec): min=1742, max=6615.3k, avg=104010.19, stdev=485639.48 00:15:17.312 clat (usec): min=7287, max=97783, avg=14450.73, stdev=11797.07 00:15:17.312 lat (usec): min=7292, max=97792, avg=14554.74, stdev=11857.69 00:15:17.312 clat percentiles (usec): 00:15:17.312 | 1.00th=[ 8225], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10421], 00:15:17.312 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11600], 60.00th=[11994], 00:15:17.312 | 70.00th=[12125], 80.00th=[12649], 90.00th=[19006], 95.00th=[33817], 00:15:17.312 | 99.00th=[68682], 99.50th=[85459], 99.90th=[98042], 99.95th=[98042], 00:15:17.312 | 99.99th=[98042] 00:15:17.312 bw ( KiB/s): min=16128, max=20480, per=24.92%, avg=18304.00, stdev=3077.33, samples=2 00:15:17.312 iops : min= 4032, max= 5120, avg=4576.00, stdev=769.33, samples=2 00:15:17.312 lat (msec) : 10=8.38%, 20=80.07%, 50=7.22%, 100=4.34% 00:15:17.312 cpu : usr=1.72%, sys=3.25%, ctx=520, majf=0, minf=1 00:15:17.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:17.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.312 issued rwts: total=4192,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.312 job1: (groupid=0, jobs=1): 
err= 0: pid=67147: Mon Jul 15 16:57:23 2024 00:15:17.312 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:15:17.312 slat (nsec): min=1054, max=17295k, avg=99979.41, stdev=705821.24 00:15:17.312 clat (usec): min=5017, max=42612, avg=13355.03, stdev=5094.57 00:15:17.312 lat (usec): min=5029, max=42629, avg=13455.01, stdev=5134.56 00:15:17.312 clat percentiles (usec): 00:15:17.312 | 1.00th=[ 5866], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[10290], 00:15:17.312 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12125], 60.00th=[13042], 00:15:17.312 | 70.00th=[13566], 80.00th=[14615], 90.00th=[20055], 95.00th=[25035], 00:15:17.312 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:15:17.312 | 99.99th=[42730] 00:15:17.312 write: IOPS=5098, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:15:17.312 slat (nsec): min=1895, max=15533k, avg=97157.40, stdev=701864.44 00:15:17.312 clat (usec): min=1523, max=42768, avg=12720.61, stdev=4546.82 00:15:17.313 lat (usec): min=3441, max=42771, avg=12817.77, stdev=4578.10 00:15:17.313 clat percentiles (usec): 00:15:17.313 | 1.00th=[ 4817], 5.00th=[ 7308], 10.00th=[ 9372], 20.00th=[10159], 00:15:17.313 | 30.00th=[10290], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:15:17.313 | 70.00th=[12649], 80.00th=[13960], 90.00th=[20055], 95.00th=[22414], 00:15:17.313 | 99.00th=[31851], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:15:17.313 | 99.99th=[42730] 00:15:17.313 bw ( KiB/s): min=19448, max=20480, per=27.18%, avg=19964.00, stdev=729.73, samples=2 00:15:17.313 iops : min= 4862, max= 5120, avg=4991.00, stdev=182.43, samples=2 00:15:17.313 lat (msec) : 2=0.01%, 4=0.14%, 10=17.53%, 20=72.48%, 50=9.84% 00:15:17.313 cpu : usr=2.89%, sys=4.78%, ctx=403, majf=0, minf=1 00:15:17.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:17.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.313 issued rwts: total=4608,5119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.313 job2: (groupid=0, jobs=1): err= 0: pid=67148: Mon Jul 15 16:57:23 2024 00:15:17.313 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:15:17.313 slat (nsec): min=1272, max=14062k, avg=125816.14, stdev=878412.44 00:15:17.313 clat (usec): min=3210, max=55101, avg=14862.55, stdev=6431.17 00:15:17.313 lat (usec): min=3770, max=55111, avg=14988.36, stdev=6490.56 00:15:17.313 clat percentiles (usec): 00:15:17.313 | 1.00th=[ 5407], 5.00th=[ 8717], 10.00th=[10945], 20.00th=[11338], 00:15:17.313 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:15:17.313 | 70.00th=[14484], 80.00th=[16319], 90.00th=[20055], 95.00th=[28967], 00:15:17.313 | 99.00th=[42730], 99.50th=[49021], 99.90th=[55313], 99.95th=[55313], 00:15:17.313 | 99.99th=[55313] 00:15:17.313 write: IOPS=4872, BW=19.0MiB/s (20.0MB/s)(19.2MiB/1008msec); 0 zone resets 00:15:17.313 slat (usec): min=2, max=10987, avg=77.82, stdev=420.64 00:15:17.313 clat (usec): min=1002, max=55052, avg=12101.37, stdev=5295.45 00:15:17.313 lat (usec): min=1015, max=55063, avg=12179.19, stdev=5323.13 00:15:17.313 clat percentiles (usec): 00:15:17.313 | 1.00th=[ 1663], 5.00th=[ 4359], 10.00th=[ 6652], 20.00th=[ 9110], 00:15:17.313 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[12125], 00:15:17.313 | 70.00th=[12911], 80.00th=[13566], 90.00th=[16909], 95.00th=[23200], 00:15:17.313 | 99.00th=[30016], 99.50th=[39584], 99.90th=[43779], 99.95th=[43779], 00:15:17.313 | 99.99th=[55313] 00:15:17.313 bw ( KiB/s): min=17800, max=20464, per=26.04%, avg=19132.00, stdev=1883.73, samples=2 00:15:17.313 iops : min= 4450, max= 5116, avg=4783.00, stdev=470.93, samples=2 00:15:17.313 lat (msec) : 2=0.61%, 4=1.64%, 10=13.13%, 20=76.47%, 50=7.91% 00:15:17.313 lat (msec) : 100=0.24% 00:15:17.313 cpu : usr=3.97%, 
sys=4.17%, ctx=603, majf=0, minf=1 00:15:17.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:17.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.313 issued rwts: total=4608,4911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.313 job3: (groupid=0, jobs=1): err= 0: pid=67149: Mon Jul 15 16:57:23 2024 00:15:17.313 read: IOPS=4301, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1007msec) 00:15:17.313 slat (nsec): min=1098, max=15728k, avg=115339.87, stdev=833260.64 00:15:17.313 clat (usec): min=4163, max=32282, avg=14176.21, stdev=3915.44 00:15:17.313 lat (usec): min=4168, max=32310, avg=14291.55, stdev=3971.93 00:15:17.313 clat percentiles (usec): 00:15:17.313 | 1.00th=[ 4883], 5.00th=[ 8160], 10.00th=[10421], 20.00th=[11600], 00:15:17.313 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13435], 60.00th=[13829], 00:15:17.313 | 70.00th=[16057], 80.00th=[16712], 90.00th=[19530], 95.00th=[21890], 00:15:17.313 | 99.00th=[25297], 99.50th=[27395], 99.90th=[28967], 99.95th=[28967], 00:15:17.313 | 99.99th=[32375] 00:15:17.313 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:15:17.313 slat (nsec): min=1880, max=10010k, avg=102689.96, stdev=464096.73 00:15:17.313 clat (usec): min=1564, max=52849, avg=14326.47, stdev=6820.76 00:15:17.313 lat (usec): min=1576, max=52856, avg=14429.16, stdev=6863.74 00:15:17.313 clat percentiles (usec): 00:15:17.313 | 1.00th=[ 3097], 5.00th=[ 6652], 10.00th=[10159], 20.00th=[11207], 00:15:17.313 | 30.00th=[11600], 40.00th=[11731], 50.00th=[12518], 60.00th=[13304], 00:15:17.313 | 70.00th=[13435], 80.00th=[17433], 90.00th=[21627], 95.00th=[27132], 00:15:17.313 | 99.00th=[46924], 99.50th=[49021], 99.90th=[52691], 99.95th=[52691], 00:15:17.313 | 99.99th=[52691] 00:15:17.313 bw ( KiB/s): min=17968, max=18896, per=25.09%, 
avg=18432.00, stdev=656.20, samples=2 00:15:17.313 iops : min= 4492, max= 4724, avg=4608.00, stdev=164.05, samples=2 00:15:17.313 lat (msec) : 2=0.12%, 4=1.05%, 10=7.68%, 20=78.77%, 50=12.20% 00:15:17.313 lat (msec) : 100=0.17% 00:15:17.313 cpu : usr=3.28%, sys=3.98%, ctx=616, majf=0, minf=1 00:15:17.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:17.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.313 issued rwts: total=4332,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.313 00:15:17.313 Run status group 0 (all jobs): 00:15:17.313 READ: bw=66.1MiB/s (69.3MB/s), 15.6MiB/s-17.9MiB/s (16.4MB/s-18.8MB/s), io=69.3MiB (72.7MB), run=1004-1048msec 00:15:17.313 WRITE: bw=71.7MiB/s (75.2MB/s), 17.2MiB/s-19.9MiB/s (18.0MB/s-20.9MB/s), io=75.2MiB (78.8MB), run=1004-1048msec 00:15:17.313 00:15:17.313 Disk stats (read/write): 00:15:17.313 nvme0n1: ios=3976/4096, merge=0/0, ticks=14170/11125, in_queue=25295, util=82.26% 00:15:17.313 nvme0n2: ios=3604/3810, merge=0/0, ticks=27146/23681, in_queue=50827, util=97.74% 00:15:17.313 nvme0n3: ios=3584/3975, merge=0/0, ticks=53679/45645, in_queue=99324, util=87.43% 00:15:17.313 nvme0n4: ios=3629/3591, merge=0/0, ticks=49152/48891, in_queue=98043, util=97.35% 00:15:17.313 16:57:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:17.313 16:57:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=67379 00:15:17.313 16:57:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:17.313 16:57:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:17.313 [global] 00:15:17.313 thread=1 00:15:17.313 invalidate=1 00:15:17.313 rw=read 00:15:17.313 time_based=1 00:15:17.313 runtime=10 
00:15:17.313 ioengine=libaio 00:15:17.313 direct=1 00:15:17.313 bs=4096 00:15:17.313 iodepth=1 00:15:17.313 norandommap=1 00:15:17.313 numjobs=1 00:15:17.313 00:15:17.313 [job0] 00:15:17.313 filename=/dev/nvme0n1 00:15:17.313 [job1] 00:15:17.313 filename=/dev/nvme0n2 00:15:17.313 [job2] 00:15:17.313 filename=/dev/nvme0n3 00:15:17.313 [job3] 00:15:17.313 filename=/dev/nvme0n4 00:15:17.313 Could not set queue depth (nvme0n1) 00:15:17.313 Could not set queue depth (nvme0n2) 00:15:17.313 Could not set queue depth (nvme0n3) 00:15:17.313 Could not set queue depth (nvme0n4) 00:15:17.570 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:17.570 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:17.570 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:17.570 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:17.570 fio-3.35 00:15:17.570 Starting 4 threads 00:15:20.858 16:57:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:20.858 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=34009088, buflen=4096 00:15:20.858 fio: pid=67585, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:20.858 16:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:20.858 16:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:20.858 16:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:20.858 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read 
offset=294912, buflen=4096 00:15:20.858 fio: pid=67579, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:20.858 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=28811264, buflen=4096 00:15:20.858 fio: pid=67549, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:20.858 16:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:20.858 16:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:21.116 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=41836544, buflen=4096 00:15:21.116 fio: pid=67562, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:21.116 16:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:21.116 16:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:21.116 00:15:21.116 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67549: Mon Jul 15 16:57:27 2024 00:15:21.116 read: IOPS=2263, BW=9053KiB/s (9270kB/s)(27.5MiB/3108msec) 00:15:21.116 slat (nsec): min=5726, max=79488, avg=7322.63, stdev=1718.35 00:15:21.116 clat (usec): min=234, max=42068, avg=429.69, stdev=2250.92 00:15:21.116 lat (usec): min=246, max=42075, avg=437.01, stdev=2251.54 00:15:21.116 clat percentiles (usec): 00:15:21.116 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:15:21.116 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 00:15:21.116 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 355], 00:15:21.116 | 99.00th=[ 437], 99.50th=[ 523], 99.90th=[41157], 99.95th=[42206], 00:15:21.116 | 99.99th=[42206] 00:15:21.116 bw ( KiB/s): min= 
106, max=13320, per=30.16%, avg=9375.00, stdev=5064.48, samples=6 00:15:21.116 iops : min= 26, max= 3330, avg=2343.67, stdev=1266.30, samples=6 00:15:21.116 lat (usec) : 250=0.24%, 500=99.16%, 750=0.23%, 1000=0.01% 00:15:21.116 lat (msec) : 4=0.01%, 20=0.03%, 50=0.30% 00:15:21.116 cpu : usr=0.64%, sys=2.00%, ctx=7037, majf=0, minf=1 00:15:21.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:21.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.116 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.116 issued rwts: total=7035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:21.116 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67562: Mon Jul 15 16:57:27 2024 00:15:21.116 read: IOPS=3098, BW=12.1MiB/s (12.7MB/s)(39.9MiB/3297msec) 00:15:21.116 slat (usec): min=5, max=28517, avg=17.17, stdev=431.63 00:15:21.116 clat (usec): min=229, max=7153, avg=301.82, stdev=85.71 00:15:21.116 lat (usec): min=236, max=28988, avg=319.00, stdev=443.27 00:15:21.116 clat percentiles (usec): 00:15:21.116 | 1.00th=[ 245], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 273], 00:15:21.116 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:15:21.116 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 351], 00:15:21.116 | 99.00th=[ 424], 99.50th=[ 449], 99.90th=[ 545], 99.95th=[ 652], 00:15:21.116 | 99.99th=[ 4178] 00:15:21.116 bw ( KiB/s): min=11781, max=13296, per=40.71%, avg=12655.50, stdev=542.84, samples=6 00:15:21.116 iops : min= 2945, max= 3324, avg=3163.83, stdev=135.79, samples=6 00:15:21.116 lat (usec) : 250=2.05%, 500=97.66%, 750=0.24%, 1000=0.01% 00:15:21.116 lat (msec) : 2=0.01%, 10=0.02% 00:15:21.116 cpu : usr=0.79%, sys=2.79%, ctx=10222, majf=0, minf=1 00:15:21.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:15:21.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.116 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.116 issued rwts: total=10215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:21.116 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67579: Mon Jul 15 16:57:27 2024 00:15:21.116 read: IOPS=24, BW=97.9KiB/s (100kB/s)(288KiB/2943msec) 00:15:21.116 slat (nsec): min=8283, max=29613, avg=17463.40, stdev=6299.53 00:15:21.116 clat (usec): min=483, max=42056, avg=40563.34, stdev=4804.17 00:15:21.116 lat (usec): min=513, max=42066, avg=40580.73, stdev=4802.76 00:15:21.116 clat percentiles (usec): 00:15:21.116 | 1.00th=[ 486], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:21.116 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:21.116 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:15:21.116 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:21.116 | 99.99th=[42206] 00:15:21.116 bw ( KiB/s): min= 96, max= 104, per=0.31%, avg=97.60, stdev= 3.58, samples=5 00:15:21.116 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:15:21.116 lat (usec) : 500=1.37% 00:15:21.116 lat (msec) : 50=97.26% 00:15:21.116 cpu : usr=0.00%, sys=0.07%, ctx=76, majf=0, minf=1 00:15:21.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:21.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.116 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.116 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:21.116 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67585: Mon Jul 15 16:57:27 2024 
00:15:21.116 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(32.4MiB/2706msec) 00:15:21.116 slat (nsec): min=6081, max=37592, avg=7259.86, stdev=1255.53 00:15:21.116 clat (usec): min=244, max=1118, avg=314.97, stdev=24.99 00:15:21.116 lat (usec): min=250, max=1125, avg=322.23, stdev=25.06 00:15:21.116 clat percentiles (usec): 00:15:21.116 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 297], 00:15:21.116 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:15:21.116 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 343], 00:15:21.116 | 99.00th=[ 363], 99.50th=[ 396], 99.90th=[ 469], 99.95th=[ 515], 00:15:21.116 | 99.99th=[ 1123] 00:15:21.116 bw ( KiB/s): min=11968, max=13272, per=39.56%, avg=12299.20, stdev=546.66, samples=5 00:15:21.116 iops : min= 2992, max= 3318, avg=3074.80, stdev=136.66, samples=5 00:15:21.116 lat (usec) : 250=0.06%, 500=99.86%, 750=0.06% 00:15:21.116 lat (msec) : 2=0.01% 00:15:21.116 cpu : usr=0.85%, sys=2.70%, ctx=8305, majf=0, minf=2 00:15:21.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:21.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.116 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.116 issued rwts: total=8304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:21.116 00:15:21.116 Run status group 0 (all jobs): 00:15:21.116 READ: bw=30.4MiB/s (31.8MB/s), 97.9KiB/s-12.1MiB/s (100kB/s-12.7MB/s), io=100MiB (105MB), run=2706-3297msec 00:15:21.116 00:15:21.116 Disk stats (read/write): 00:15:21.116 nvme0n1: ios=7035/0, merge=0/0, ticks=2994/0, in_queue=2994, util=95.38% 00:15:21.116 nvme0n2: ios=9795/0, merge=0/0, ticks=2932/0, in_queue=2932, util=94.59% 00:15:21.116 nvme0n3: ios=111/0, merge=0/0, ticks=3721/0, in_queue=3721, util=99.12% 00:15:21.116 nvme0n4: ios=8024/0, merge=0/0, ticks=2493/0, in_queue=2493, util=96.41% 00:15:21.373 16:57:27 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:21.373 16:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:21.373 16:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:21.373 16:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:21.629 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:21.629 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:21.885 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:21.885 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 67379 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:22.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:22.142 
16:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:22.142 nvmf hotplug test: fio failed as expected 00:15:22.142 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:22.399 rmmod nvme_tcp 00:15:22.399 rmmod nvme_fabrics 00:15:22.399 rmmod nvme_keyring 
00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 64634 ']' 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 64634 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 64634 ']' 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 64634 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:22.399 16:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64634 00:15:22.399 16:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:22.399 16:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:22.399 16:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64634' 00:15:22.399 killing process with pid 64634 00:15:22.399 16:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 64634 00:15:22.399 16:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 64634 00:15:22.656 16:57:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:22.656 16:57:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:22.656 16:57:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:22.656 16:57:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:22.656 16:57:29 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:22.656 16:57:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.656 16:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.656 16:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.184 16:57:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:25.184 00:15:25.184 real 0m26.748s 00:15:25.184 user 1m46.493s 00:15:25.184 sys 0m8.301s 00:15:25.184 16:57:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:25.184 16:57:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.184 ************************************ 00:15:25.184 END TEST nvmf_fio_target 00:15:25.184 ************************************ 00:15:25.184 16:57:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:25.184 16:57:31 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:25.184 16:57:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:25.184 16:57:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.184 16:57:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.184 ************************************ 00:15:25.184 START TEST nvmf_bdevio 00:15:25.184 ************************************ 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:25.184 * Looking for test storage... 
00:15:25.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:25.184 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:25.185 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:25.185 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.185 16:57:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.185 16:57:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.185 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:25.185 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:25.185 16:57:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:25.185 16:57:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:30.449 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:30.449 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:30.449 Found net devices under 0000:86:00.0: cvl_0_0 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:30.449 Found net devices under 0000:86:00.1: cvl_0_1 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:30.449 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:30.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:15:30.450 00:15:30.450 --- 10.0.0.2 ping statistics --- 00:15:30.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.450 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:15:30.450 00:15:30.450 --- 10.0.0.1 ping statistics --- 00:15:30.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.450 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=71757 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 71757 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 71757 ']' 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.450 
16:57:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.450 16:57:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:30.450 [2024-07-15 16:57:36.488167] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:15:30.450 [2024-07-15 16:57:36.488211] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.450 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.450 [2024-07-15 16:57:36.544093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.450 [2024-07-15 16:57:36.624429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.450 [2024-07-15 16:57:36.624462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.450 [2024-07-15 16:57:36.624469] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.450 [2024-07-15 16:57:36.624475] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.450 [2024-07-15 16:57:36.624480] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:30.450 [2024-07-15 16:57:36.624528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:30.450 [2024-07-15 16:57:36.624635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:30.450 [2024-07-15 16:57:36.624751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.450 [2024-07-15 16:57:36.624752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:30.708 [2024-07-15 16:57:37.329230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:30.708 Malloc0 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:30.708 [2024-07-15 16:57:37.372523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.708 16:57:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.966 16:57:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:30.966 16:57:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:30.966 16:57:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:30.966 16:57:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:30.966 16:57:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:30.966 16:57:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:15:30.966 { 00:15:30.966 "params": { 00:15:30.966 "name": "Nvme$subsystem", 00:15:30.966 "trtype": "$TEST_TRANSPORT", 00:15:30.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.966 "adrfam": "ipv4", 00:15:30.966 "trsvcid": "$NVMF_PORT", 00:15:30.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.966 "hdgst": ${hdgst:-false}, 00:15:30.966 "ddgst": ${ddgst:-false} 00:15:30.966 }, 00:15:30.966 "method": "bdev_nvme_attach_controller" 00:15:30.966 } 00:15:30.966 EOF 00:15:30.966 )") 00:15:30.966 16:57:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:30.966 16:57:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:30.966 16:57:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:30.966 16:57:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:30.966 "params": { 00:15:30.966 "name": "Nvme1", 00:15:30.966 "trtype": "tcp", 00:15:30.966 "traddr": "10.0.0.2", 00:15:30.966 "adrfam": "ipv4", 00:15:30.966 "trsvcid": "4420", 00:15:30.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.966 "hdgst": false, 00:15:30.966 "ddgst": false 00:15:30.966 }, 00:15:30.966 "method": "bdev_nvme_attach_controller" 00:15:30.966 }' 00:15:30.966 [2024-07-15 16:57:37.419435] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:15:30.966 [2024-07-15 16:57:37.419482] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72005 ] 00:15:30.966 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.966 [2024-07-15 16:57:37.473931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:30.966 [2024-07-15 16:57:37.549803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.966 [2024-07-15 16:57:37.549820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.966 [2024-07-15 16:57:37.549822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.224 I/O targets: 00:15:31.224 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:31.224 00:15:31.224 00:15:31.224 CUnit - A unit testing framework for C - Version 2.1-3 00:15:31.224 http://cunit.sourceforge.net/ 00:15:31.224 00:15:31.224 00:15:31.224 Suite: bdevio tests on: Nvme1n1 00:15:31.224 Test: blockdev write read block ...passed 00:15:31.482 Test: blockdev write zeroes read block ...passed 00:15:31.482 Test: blockdev write zeroes read no split ...passed 00:15:31.482 Test: blockdev write zeroes read split ...passed 00:15:31.482 Test: blockdev write zeroes read split partial ...passed 00:15:31.482 Test: blockdev reset ...[2024-07-15 16:57:38.025998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:31.482 [2024-07-15 16:57:38.026059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a066d0 (9): Bad file descriptor 00:15:31.482 [2024-07-15 16:57:38.084713] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:31.482 passed 00:15:31.482 Test: blockdev write read 8 blocks ...passed 00:15:31.482 Test: blockdev write read size > 128k ...passed 00:15:31.482 Test: blockdev write read invalid size ...passed 00:15:31.482 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:31.482 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:31.482 Test: blockdev write read max offset ...passed 00:15:31.741 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:31.741 Test: blockdev writev readv 8 blocks ...passed 00:15:31.741 Test: blockdev writev readv 30 x 1block ...passed 00:15:31.741 Test: blockdev writev readv block ...passed 00:15:31.741 Test: blockdev writev readv size > 128k ...passed 00:15:31.741 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:31.741 Test: blockdev comparev and writev ...[2024-07-15 16:57:38.255493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.741 [2024-07-15 16:57:38.255522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:31.741 [2024-07-15 16:57:38.255536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.741 [2024-07-15 16:57:38.255544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:31.741 [2024-07-15 16:57:38.255806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.741 [2024-07-15 16:57:38.255817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:31.741 [2024-07-15 16:57:38.255829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.741 [2024-07-15 16:57:38.255836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:31.741 [2024-07-15 16:57:38.256099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.741 [2024-07-15 16:57:38.256110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:31.741 [2024-07-15 16:57:38.256125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.741 [2024-07-15 16:57:38.256133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:31.741 [2024-07-15 16:57:38.256391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.741 [2024-07-15 16:57:38.256403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:31.741 [2024-07-15 16:57:38.256415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.741 [2024-07-15 16:57:38.256422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:31.741 passed 00:15:31.741 Test: blockdev nvme passthru rw ...passed 00:15:31.741 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:57:38.338559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:31.741 [2024-07-15 16:57:38.338577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:31.741 [2024-07-15 16:57:38.338753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:31.741 [2024-07-15 16:57:38.338764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:31.741 [2024-07-15 16:57:38.338931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:31.741 [2024-07-15 16:57:38.338942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:31.741 [2024-07-15 16:57:38.339110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:31.741 [2024-07-15 16:57:38.339121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:31.741 passed 00:15:31.741 Test: blockdev nvme admin passthru ...passed 00:15:31.741 Test: blockdev copy ...passed 00:15:31.741 00:15:31.741 Run Summary: Type Total Ran Passed Failed Inactive 00:15:31.741 suites 1 1 n/a 0 0 00:15:31.741 tests 23 23 23 0 0 00:15:31.741 asserts 152 152 152 0 n/a 00:15:31.741 00:15:31.741 Elapsed time = 1.142 seconds 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.999 rmmod nvme_tcp 00:15:31.999 rmmod nvme_fabrics 00:15:31.999 rmmod nvme_keyring 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 71757 ']' 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 71757 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 71757 ']' 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 71757 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.999 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71757 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71757' 00:15:32.259 killing process with pid 71757 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 71757 00:15:32.259 
16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 71757 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.259 16:57:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.793 16:57:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:34.793 00:15:34.793 real 0m9.593s 00:15:34.793 user 0m12.590s 00:15:34.793 sys 0m4.341s 00:15:34.793 16:57:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.793 16:57:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:34.793 ************************************ 00:15:34.793 END TEST nvmf_bdevio 00:15:34.793 ************************************ 00:15:34.793 16:57:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:34.793 16:57:40 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:34.793 16:57:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:34.793 16:57:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.793 16:57:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:34.793 ************************************ 00:15:34.793 START TEST nvmf_auth_target 00:15:34.793 
************************************ 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:34.793 * Looking for test storage... 00:15:34.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.793 16:57:41 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.793 16:57:41 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.793 16:57:41 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:34.793 16:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.052 16:57:46 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:40.052 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.052 16:57:46 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:40.052 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:40.052 Found net devices under 0000:86:00.0: cvl_0_0 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:40.052 Found net devices under 0000:86:00.1: cvl_0_1 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:40.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:15:40.052 00:15:40.052 --- 10.0.0.2 ping statistics --- 00:15:40.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.052 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:40.052 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:15:40.052 00:15:40.052 --- 10.0.0.1 ping statistics --- 00:15:40.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.052 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=75536 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 75536 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 75536 ']' 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.053 16:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=75778 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=bab0ff7dc8c395c1265157608e95ae15de9912c6cab7d332 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:40.618 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gUq 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bab0ff7dc8c395c1265157608e95ae15de9912c6cab7d332 0 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bab0ff7dc8c395c1265157608e95ae15de9912c6cab7d332 0 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bab0ff7dc8c395c1265157608e95ae15de9912c6cab7d332 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gUq 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gUq 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.gUq 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:40.619 16:57:47 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=eeadaf646f61b26d924b96f71bcf44c77229ffe6cfb5ba116fddf889123f27e2 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.S5W 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key eeadaf646f61b26d924b96f71bcf44c77229ffe6cfb5ba116fddf889123f27e2 3 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 eeadaf646f61b26d924b96f71bcf44c77229ffe6cfb5ba116fddf889123f27e2 3 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=eeadaf646f61b26d924b96f71bcf44c77229ffe6cfb5ba116fddf889123f27e2 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:40.619 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.S5W 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.S5W 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.S5W 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=516ea9f04353d9953c87b2a399560968 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.P3O 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 516ea9f04353d9953c87b2a399560968 1 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 516ea9f04353d9953c87b2a399560968 1 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=516ea9f04353d9953c87b2a399560968 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.P3O 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.P3O 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.P3O 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=943d9df331d66b1b5e214df0f1de5f01196d1d3bb402634b 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.SQP 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 943d9df331d66b1b5e214df0f1de5f01196d1d3bb402634b 2 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 943d9df331d66b1b5e214df0f1de5f01196d1d3bb402634b 2 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=943d9df331d66b1b5e214df0f1de5f01196d1d3bb402634b 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.SQP 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.SQP 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.SQP 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a52ce10a8ffef76dfc8ef0b435d2f655e11315953d3b456c 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FbJ 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a52ce10a8ffef76dfc8ef0b435d2f655e11315953d3b456c 2 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a52ce10a8ffef76dfc8ef0b435d2f655e11315953d3b456c 2 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a52ce10a8ffef76dfc8ef0b435d2f655e11315953d3b456c 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FbJ 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FbJ 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.FbJ 00:15:40.920 16:57:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=74f213720fbb7afb0987e84ef8e7f8f8 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.S0W 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 74f213720fbb7afb0987e84ef8e7f8f8 1 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 74f213720fbb7afb0987e84ef8e7f8f8 1 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=74f213720fbb7afb0987e84ef8e7f8f8 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:40.920 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:41.179 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.S0W 00:15:41.179 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.S0W 00:15:41.179 16:57:47 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.S0W 00:15:41.179 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9fbe884ad48d91c149bec262c3ffc1045885b3f6f1e00ed2b583486bf01b61c2 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.aVP 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9fbe884ad48d91c149bec262c3ffc1045885b3f6f1e00ed2b583486bf01b61c2 3 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9fbe884ad48d91c149bec262c3ffc1045885b3f6f1e00ed2b583486bf01b61c2 3 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9fbe884ad48d91c149bec262c3ffc1045885b3f6f1e00ed2b583486bf01b61c2 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.aVP 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.aVP 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.aVP 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 75536 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 75536 ']' 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 75778 /var/tmp/host.sock 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 75778 ']' 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:41.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.180 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.438 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.438 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:41.438 16:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:41.438 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.438 16:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.438 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.438 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:41.438 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gUq 00:15:41.438 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.438 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.438 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.438 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.gUq 00:15:41.438 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.gUq 00:15:41.695 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.S5W ]] 00:15:41.695 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.S5W 00:15:41.695 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.695 16:57:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.696 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.696 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.S5W 00:15:41.696 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.S5W 00:15:41.696 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:41.696 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.P3O 00:15:41.696 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.696 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.696 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.953 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.P3O 00:15:41.953 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.P3O 00:15:41.953 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.SQP ]] 00:15:41.953 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SQP 00:15:41.953 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.953 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.953 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.953 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SQP 00:15:41.953 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SQP 00:15:42.210 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:42.210 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FbJ 00:15:42.210 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.210 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.210 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.210 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FbJ 00:15:42.210 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FbJ 00:15:42.467 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.S0W ]] 00:15:42.467 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.S0W 00:15:42.467 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.467 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.467 16:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.468 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.S0W 00:15:42.468 16:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.S0W 00:15:42.468 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:42.468 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.aVP 00:15:42.468 16:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.468 16:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.468 16:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.468 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.aVP 00:15:42.468 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.aVP 00:15:42.725 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:42.725 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:42.725 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.725 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.725 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:42.725 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:42.982 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:42.982 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.982 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:42.982 16:57:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:42.982 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:42.982 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.982 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.982 16:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.982 16:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.982 16:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.982 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.982 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.239 00:15:43.239 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.239 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.239 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.239 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.240 
16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.240 16:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.240 16:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.240 16:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.240 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.240 { 00:15:43.240 "cntlid": 1, 00:15:43.240 "qid": 0, 00:15:43.240 "state": "enabled", 00:15:43.240 "thread": "nvmf_tgt_poll_group_000", 00:15:43.240 "listen_address": { 00:15:43.240 "trtype": "TCP", 00:15:43.240 "adrfam": "IPv4", 00:15:43.240 "traddr": "10.0.0.2", 00:15:43.240 "trsvcid": "4420" 00:15:43.240 }, 00:15:43.240 "peer_address": { 00:15:43.240 "trtype": "TCP", 00:15:43.240 "adrfam": "IPv4", 00:15:43.240 "traddr": "10.0.0.1", 00:15:43.240 "trsvcid": "38650" 00:15:43.240 }, 00:15:43.240 "auth": { 00:15:43.240 "state": "completed", 00:15:43.240 "digest": "sha256", 00:15:43.240 "dhgroup": "null" 00:15:43.240 } 00:15:43.240 } 00:15:43.240 ]' 00:15:43.240 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.498 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.498 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.498 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:43.498 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.498 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.498 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.498 16:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.755 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.320 16:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.577 00:15:44.577 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.577 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.577 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:15:44.834 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.834 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.834 16:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.834 16:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.834 16:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.834 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.834 { 00:15:44.834 "cntlid": 3, 00:15:44.834 "qid": 0, 00:15:44.834 "state": "enabled", 00:15:44.834 "thread": "nvmf_tgt_poll_group_000", 00:15:44.834 "listen_address": { 00:15:44.834 "trtype": "TCP", 00:15:44.834 "adrfam": "IPv4", 00:15:44.834 "traddr": "10.0.0.2", 00:15:44.834 "trsvcid": "4420" 00:15:44.834 }, 00:15:44.834 "peer_address": { 00:15:44.834 "trtype": "TCP", 00:15:44.834 "adrfam": "IPv4", 00:15:44.834 "traddr": "10.0.0.1", 00:15:44.834 "trsvcid": "38678" 00:15:44.835 }, 00:15:44.835 "auth": { 00:15:44.835 "state": "completed", 00:15:44.835 "digest": "sha256", 00:15:44.835 "dhgroup": "null" 00:15:44.835 } 00:15:44.835 } 00:15:44.835 ]' 00:15:44.835 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.835 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.835 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.835 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:44.835 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.835 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.835 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:15:44.835 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.092 16:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:15:45.667 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.667 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.667 16:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.667 16:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.667 16:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.667 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.667 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.667 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.924 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.182 00:15:46.182 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.182 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.182 16:57:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.182 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.182 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.182 16:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.182 16:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.439 16:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.439 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.439 { 00:15:46.439 "cntlid": 5, 00:15:46.439 "qid": 0, 00:15:46.439 "state": "enabled", 00:15:46.439 "thread": "nvmf_tgt_poll_group_000", 00:15:46.439 "listen_address": { 00:15:46.439 "trtype": "TCP", 00:15:46.439 "adrfam": "IPv4", 00:15:46.439 "traddr": "10.0.0.2", 00:15:46.439 "trsvcid": "4420" 00:15:46.439 }, 00:15:46.439 "peer_address": { 00:15:46.439 "trtype": "TCP", 00:15:46.439 "adrfam": "IPv4", 00:15:46.439 "traddr": "10.0.0.1", 00:15:46.439 "trsvcid": "38696" 00:15:46.439 }, 00:15:46.439 "auth": { 00:15:46.439 "state": "completed", 00:15:46.439 "digest": "sha256", 00:15:46.439 "dhgroup": "null" 00:15:46.439 } 00:15:46.439 } 00:15:46.439 ]' 00:15:46.439 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.439 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.439 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.439 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:46.439 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.439 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.439 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:15:46.439 16:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.696 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:47.262 16:57:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.262 16:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.518 00:15:47.518 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.518 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.518 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:47.774 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.774 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.774 16:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.774 16:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.774 16:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.774 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.774 { 00:15:47.774 "cntlid": 7, 00:15:47.775 "qid": 0, 00:15:47.775 "state": "enabled", 00:15:47.775 "thread": "nvmf_tgt_poll_group_000", 00:15:47.775 "listen_address": { 00:15:47.775 "trtype": "TCP", 00:15:47.775 "adrfam": "IPv4", 00:15:47.775 "traddr": "10.0.0.2", 00:15:47.775 "trsvcid": "4420" 00:15:47.775 }, 00:15:47.775 "peer_address": { 00:15:47.775 "trtype": "TCP", 00:15:47.775 "adrfam": "IPv4", 00:15:47.775 "traddr": "10.0.0.1", 00:15:47.775 "trsvcid": "38722" 00:15:47.775 }, 00:15:47.775 "auth": { 00:15:47.775 "state": "completed", 00:15:47.775 "digest": "sha256", 00:15:47.775 "dhgroup": "null" 00:15:47.775 } 00:15:47.775 } 00:15:47.775 ]' 00:15:47.775 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.775 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.775 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.775 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:47.775 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.041 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.041 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:48.041 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.041 16:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:15:48.609 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.609 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.609 16:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.609 16:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.609 16:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.609 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.609 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.609 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:48.609 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.867 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.124 00:15:49.124 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.124 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.124 16:57:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.381 { 00:15:49.381 "cntlid": 9, 00:15:49.381 "qid": 0, 00:15:49.381 "state": "enabled", 00:15:49.381 "thread": "nvmf_tgt_poll_group_000", 00:15:49.381 "listen_address": { 00:15:49.381 "trtype": "TCP", 00:15:49.381 "adrfam": "IPv4", 00:15:49.381 "traddr": "10.0.0.2", 00:15:49.381 "trsvcid": "4420" 00:15:49.381 }, 00:15:49.381 "peer_address": { 00:15:49.381 "trtype": "TCP", 00:15:49.381 "adrfam": "IPv4", 00:15:49.381 "traddr": "10.0.0.1", 00:15:49.381 "trsvcid": "38748" 00:15:49.381 }, 00:15:49.381 "auth": { 00:15:49.381 "state": "completed", 00:15:49.381 "digest": "sha256", 00:15:49.381 "dhgroup": "ffdhe2048" 00:15:49.381 } 00:15:49.381 } 00:15:49.381 ]' 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.381 16:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.639 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:15:50.204 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.204 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.204 16:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.204 16:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.204 16:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.204 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.204 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.204 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.462 16:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.462 00:15:50.719 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.719 16:57:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.719 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.719 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.720 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.720 16:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.720 16:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.720 16:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.720 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.720 { 00:15:50.720 "cntlid": 11, 00:15:50.720 "qid": 0, 00:15:50.720 "state": "enabled", 00:15:50.720 "thread": "nvmf_tgt_poll_group_000", 00:15:50.720 "listen_address": { 00:15:50.720 "trtype": "TCP", 00:15:50.720 "adrfam": "IPv4", 00:15:50.720 "traddr": "10.0.0.2", 00:15:50.720 "trsvcid": "4420" 00:15:50.720 }, 00:15:50.720 "peer_address": { 00:15:50.720 "trtype": "TCP", 00:15:50.720 "adrfam": "IPv4", 00:15:50.720 "traddr": "10.0.0.1", 00:15:50.720 "trsvcid": "39576" 00:15:50.720 }, 00:15:50.720 "auth": { 00:15:50.720 "state": "completed", 00:15:50.720 "digest": "sha256", 00:15:50.720 "dhgroup": "ffdhe2048" 00:15:50.720 } 00:15:50.720 } 00:15:50.720 ]' 00:15:50.720 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.720 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.720 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.977 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.977 16:57:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.977 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.977 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.977 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.977 16:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:15:51.544 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.544 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.544 16:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.544 16:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.544 16:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.544 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.544 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.544 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.800 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.057 
00:15:52.057 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.057 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.057 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.313 { 00:15:52.313 "cntlid": 13, 00:15:52.313 "qid": 0, 00:15:52.313 "state": "enabled", 00:15:52.313 "thread": "nvmf_tgt_poll_group_000", 00:15:52.313 "listen_address": { 00:15:52.313 "trtype": "TCP", 00:15:52.313 "adrfam": "IPv4", 00:15:52.313 "traddr": "10.0.0.2", 00:15:52.313 "trsvcid": "4420" 00:15:52.313 }, 00:15:52.313 "peer_address": { 00:15:52.313 "trtype": "TCP", 00:15:52.313 "adrfam": "IPv4", 00:15:52.313 "traddr": "10.0.0.1", 00:15:52.313 "trsvcid": "39598" 00:15:52.313 }, 00:15:52.313 "auth": { 00:15:52.313 "state": "completed", 00:15:52.313 "digest": "sha256", 00:15:52.313 "dhgroup": "ffdhe2048" 00:15:52.313 } 00:15:52.313 } 00:15:52.313 ]' 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.313 16:57:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.313 16:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.569 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:15:53.133 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.133 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.133 16:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.133 16:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.133 16:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.133 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.133 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:15:53.133 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.390 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:53.390 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.390 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.390 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:53.390 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:53.390 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.390 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:53.390 16:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.390 16:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.390 16:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.391 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.391 16:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.391 
00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.650 { 00:15:53.650 "cntlid": 15, 00:15:53.650 "qid": 0, 00:15:53.650 "state": "enabled", 00:15:53.650 "thread": "nvmf_tgt_poll_group_000", 00:15:53.650 "listen_address": { 00:15:53.650 "trtype": "TCP", 00:15:53.650 "adrfam": "IPv4", 00:15:53.650 "traddr": "10.0.0.2", 00:15:53.650 "trsvcid": "4420" 00:15:53.650 }, 00:15:53.650 "peer_address": { 00:15:53.650 "trtype": "TCP", 00:15:53.650 "adrfam": "IPv4", 00:15:53.650 "traddr": "10.0.0.1", 00:15:53.650 "trsvcid": "39632" 00:15:53.650 }, 00:15:53.650 "auth": { 00:15:53.650 "state": "completed", 00:15:53.650 "digest": "sha256", 00:15:53.650 "dhgroup": "ffdhe2048" 00:15:53.650 } 00:15:53.650 } 00:15:53.650 ]' 00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.650 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.908 16:58:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:53.908 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.908 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.908 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.908 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.166 16:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.732 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.991 00:15:54.991 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.991 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.991 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.249 { 00:15:55.249 "cntlid": 17, 00:15:55.249 "qid": 0, 00:15:55.249 "state": "enabled", 00:15:55.249 "thread": "nvmf_tgt_poll_group_000", 00:15:55.249 "listen_address": { 00:15:55.249 "trtype": "TCP", 00:15:55.249 "adrfam": "IPv4", 00:15:55.249 "traddr": "10.0.0.2", 00:15:55.249 "trsvcid": "4420" 00:15:55.249 }, 00:15:55.249 "peer_address": { 00:15:55.249 "trtype": "TCP", 00:15:55.249 "adrfam": "IPv4", 00:15:55.249 "traddr": "10.0.0.1", 00:15:55.249 "trsvcid": "39672" 00:15:55.249 }, 00:15:55.249 "auth": { 00:15:55.249 "state": "completed", 00:15:55.249 "digest": "sha256", 00:15:55.249 "dhgroup": "ffdhe3072" 00:15:55.249 } 00:15:55.249 } 00:15:55.249 ]' 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.249 16:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.508 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:15:56.075 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.075 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.075 16:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.075 16:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.075 16:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.075 16:58:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.075 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:56.075 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.333 16:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.592 00:15:56.592 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.592 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.592 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.848 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.848 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.848 16:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.849 16:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.849 16:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.849 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.849 { 00:15:56.849 "cntlid": 19, 00:15:56.849 "qid": 0, 00:15:56.849 "state": "enabled", 00:15:56.849 "thread": "nvmf_tgt_poll_group_000", 00:15:56.849 "listen_address": { 00:15:56.849 "trtype": "TCP", 00:15:56.849 "adrfam": "IPv4", 00:15:56.849 "traddr": "10.0.0.2", 00:15:56.849 "trsvcid": "4420" 00:15:56.849 }, 00:15:56.849 "peer_address": { 00:15:56.849 "trtype": "TCP", 00:15:56.849 "adrfam": "IPv4", 00:15:56.849 "traddr": "10.0.0.1", 00:15:56.849 "trsvcid": "39688" 00:15:56.849 }, 00:15:56.849 "auth": { 00:15:56.849 "state": "completed", 00:15:56.849 "digest": "sha256", 00:15:56.849 "dhgroup": "ffdhe3072" 00:15:56.849 } 00:15:56.849 } 00:15:56.849 ]' 00:15:56.849 
16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.849 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.849 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.849 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:56.849 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.849 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.849 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.849 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.105 16:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:15:57.688 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.689 16:58:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:57.689 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.946 00:15:57.946 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.946 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.946 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.205 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.205 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.205 16:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.205 16:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.205 16:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.205 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.205 { 00:15:58.205 "cntlid": 21, 00:15:58.205 "qid": 0, 00:15:58.205 "state": "enabled", 00:15:58.205 "thread": "nvmf_tgt_poll_group_000", 00:15:58.205 "listen_address": { 00:15:58.205 "trtype": "TCP", 00:15:58.205 "adrfam": "IPv4", 00:15:58.205 "traddr": "10.0.0.2", 00:15:58.205 "trsvcid": "4420" 00:15:58.205 }, 00:15:58.205 "peer_address": { 00:15:58.205 "trtype": "TCP", 00:15:58.205 "adrfam": "IPv4", 00:15:58.205 "traddr": "10.0.0.1", 00:15:58.205 "trsvcid": "39720" 00:15:58.205 }, 00:15:58.205 "auth": { 00:15:58.205 "state": "completed", 00:15:58.205 "digest": 
"sha256", 00:15:58.205 "dhgroup": "ffdhe3072" 00:15:58.205 } 00:15:58.205 } 00:15:58.205 ]' 00:15:58.205 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.205 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.205 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.463 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:58.463 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.463 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.463 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.463 16:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.463 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:15:59.030 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.030 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.030 16:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.030 16:58:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.288 16:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.546 00:15:59.546 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.546 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.546 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.805 { 00:15:59.805 "cntlid": 23, 00:15:59.805 "qid": 0, 00:15:59.805 "state": "enabled", 00:15:59.805 "thread": "nvmf_tgt_poll_group_000", 00:15:59.805 "listen_address": { 00:15:59.805 "trtype": "TCP", 00:15:59.805 "adrfam": "IPv4", 00:15:59.805 "traddr": "10.0.0.2", 00:15:59.805 "trsvcid": "4420" 00:15:59.805 }, 00:15:59.805 "peer_address": { 00:15:59.805 "trtype": "TCP", 00:15:59.805 "adrfam": "IPv4", 00:15:59.805 "traddr": "10.0.0.1", 00:15:59.805 "trsvcid": "39750" 00:15:59.805 }, 00:15:59.805 "auth": 
{ 00:15:59.805 "state": "completed", 00:15:59.805 "digest": "sha256", 00:15:59.805 "dhgroup": "ffdhe3072" 00:15:59.805 } 00:15:59.805 } 00:15:59.805 ]' 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.805 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.063 16:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:16:00.629 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.629 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.629 16:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.629 16:58:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.629 16:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.629 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.629 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.629 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:00.629 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.887 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.145 00:16:01.145 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.145 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.145 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.403 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.403 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.403 16:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.403 16:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.403 16:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.403 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.403 { 00:16:01.403 "cntlid": 25, 00:16:01.403 "qid": 0, 00:16:01.403 "state": "enabled", 00:16:01.403 "thread": "nvmf_tgt_poll_group_000", 00:16:01.403 "listen_address": { 00:16:01.403 "trtype": "TCP", 00:16:01.403 "adrfam": "IPv4", 00:16:01.403 "traddr": "10.0.0.2", 00:16:01.403 "trsvcid": "4420" 00:16:01.403 }, 00:16:01.403 "peer_address": { 00:16:01.403 "trtype": "TCP", 
00:16:01.403 "adrfam": "IPv4", 00:16:01.403 "traddr": "10.0.0.1", 00:16:01.403 "trsvcid": "44118" 00:16:01.403 }, 00:16:01.403 "auth": { 00:16:01.403 "state": "completed", 00:16:01.403 "digest": "sha256", 00:16:01.403 "dhgroup": "ffdhe4096" 00:16:01.403 } 00:16:01.403 } 00:16:01.403 ]' 00:16:01.403 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.403 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.403 16:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.403 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:01.403 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.403 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.403 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.403 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.662 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:16:02.228 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.228 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.228 16:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.228 16:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.228 16:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.228 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.228 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:02.228 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:02.486 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:02.486 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.486 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:02.486 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:02.486 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:02.486 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.486 16:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.486 16:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.486 16:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.486 16:58:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.486 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.486 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.744 00:16:02.744 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.744 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.744 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.002 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.002 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.002 16:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.002 16:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.002 16:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.002 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.003 { 00:16:03.003 "cntlid": 27, 00:16:03.003 "qid": 0, 00:16:03.003 "state": "enabled", 00:16:03.003 "thread": "nvmf_tgt_poll_group_000", 00:16:03.003 "listen_address": { 00:16:03.003 "trtype": "TCP", 00:16:03.003 "adrfam": 
"IPv4", 00:16:03.003 "traddr": "10.0.0.2", 00:16:03.003 "trsvcid": "4420" 00:16:03.003 }, 00:16:03.003 "peer_address": { 00:16:03.003 "trtype": "TCP", 00:16:03.003 "adrfam": "IPv4", 00:16:03.003 "traddr": "10.0.0.1", 00:16:03.003 "trsvcid": "44150" 00:16:03.003 }, 00:16:03.003 "auth": { 00:16:03.003 "state": "completed", 00:16:03.003 "digest": "sha256", 00:16:03.003 "dhgroup": "ffdhe4096" 00:16:03.003 } 00:16:03.003 } 00:16:03.003 ]' 00:16:03.003 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.003 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.003 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.003 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:03.003 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.003 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.003 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.003 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.261 16:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:16:03.828 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:03.828 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.828 16:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.828 16:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.828 16:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.828 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.828 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:03.828 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:04.086 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:04.086 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.086 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:04.086 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:04.086 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:04.086 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.086 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.086 16:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.086 16:58:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.086 16:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.086 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.086 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.344 00:16:04.344 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.344 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.344 16:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.602 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.602 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.602 16:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.602 16:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.602 16:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.602 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.602 { 00:16:04.602 "cntlid": 29, 00:16:04.602 "qid": 0, 00:16:04.602 "state": "enabled", 00:16:04.602 "thread": 
"nvmf_tgt_poll_group_000", 00:16:04.602 "listen_address": { 00:16:04.602 "trtype": "TCP", 00:16:04.602 "adrfam": "IPv4", 00:16:04.602 "traddr": "10.0.0.2", 00:16:04.602 "trsvcid": "4420" 00:16:04.602 }, 00:16:04.602 "peer_address": { 00:16:04.602 "trtype": "TCP", 00:16:04.602 "adrfam": "IPv4", 00:16:04.602 "traddr": "10.0.0.1", 00:16:04.602 "trsvcid": "44172" 00:16:04.602 }, 00:16:04.602 "auth": { 00:16:04.602 "state": "completed", 00:16:04.602 "digest": "sha256", 00:16:04.602 "dhgroup": "ffdhe4096" 00:16:04.602 } 00:16:04.602 } 00:16:04.602 ]' 00:16:04.603 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.603 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.603 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.603 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:04.603 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.603 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.603 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.603 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.860 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:16:05.427 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.427 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.427 16:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.427 16:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.427 16:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.427 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.427 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:05.427 16:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:05.427 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:05.427 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.427 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:05.427 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:05.428 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:05.428 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.428 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:05.428 16:58:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.428 16:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.428 16:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.428 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.428 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.686 00:16:05.686 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.686 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.686 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.944 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.945 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.945 16:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.945 16:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.945 16:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.945 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.945 { 00:16:05.945 "cntlid": 31, 00:16:05.945 "qid": 0, 00:16:05.945 "state": "enabled", 00:16:05.945 "thread": 
"nvmf_tgt_poll_group_000", 00:16:05.945 "listen_address": { 00:16:05.945 "trtype": "TCP", 00:16:05.945 "adrfam": "IPv4", 00:16:05.945 "traddr": "10.0.0.2", 00:16:05.945 "trsvcid": "4420" 00:16:05.945 }, 00:16:05.945 "peer_address": { 00:16:05.945 "trtype": "TCP", 00:16:05.945 "adrfam": "IPv4", 00:16:05.945 "traddr": "10.0.0.1", 00:16:05.945 "trsvcid": "44190" 00:16:05.945 }, 00:16:05.945 "auth": { 00:16:05.945 "state": "completed", 00:16:05.945 "digest": "sha256", 00:16:05.945 "dhgroup": "ffdhe4096" 00:16:05.945 } 00:16:05.945 } 00:16:05.945 ]' 00:16:05.945 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.945 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.945 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.945 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.945 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.203 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.203 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.203 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.203 16:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:16:06.768 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.768 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.768 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.768 16:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.768 16:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:07.026 16:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.027 16:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.027 16:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.027 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.027 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.284 00:16:07.542 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.542 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.542 16:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.542 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.542 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.542 16:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.542 16:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.542 16:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.542 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:07.542 { 00:16:07.542 "cntlid": 33, 00:16:07.542 "qid": 0, 00:16:07.542 "state": "enabled", 00:16:07.542 "thread": "nvmf_tgt_poll_group_000", 00:16:07.542 "listen_address": { 00:16:07.542 "trtype": "TCP", 00:16:07.542 "adrfam": "IPv4", 00:16:07.542 "traddr": "10.0.0.2", 00:16:07.542 "trsvcid": "4420" 00:16:07.542 }, 00:16:07.542 "peer_address": { 00:16:07.542 "trtype": "TCP", 00:16:07.542 "adrfam": "IPv4", 00:16:07.542 "traddr": "10.0.0.1", 00:16:07.542 "trsvcid": "44210" 00:16:07.542 }, 00:16:07.542 "auth": { 00:16:07.542 "state": "completed", 00:16:07.542 "digest": "sha256", 00:16:07.542 "dhgroup": "ffdhe6144" 00:16:07.542 } 00:16:07.542 } 00:16:07.542 ]' 00:16:07.542 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.542 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.542 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.800 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.800 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.800 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.800 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.800 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.800 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret 
DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:16:08.365 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.365 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.365 16:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.365 16:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.365 16:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.365 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.365 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:08.365 16:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:08.661 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:08.662 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.662 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:08.662 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:08.662 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:08.662 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.662 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.662 16:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.662 16:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.662 16:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.662 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.662 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.920 00:16:08.920 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.920 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.920 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.179 16:58:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.179 { 00:16:09.179 "cntlid": 35, 00:16:09.179 "qid": 0, 00:16:09.179 "state": "enabled", 00:16:09.179 "thread": "nvmf_tgt_poll_group_000", 00:16:09.179 "listen_address": { 00:16:09.179 "trtype": "TCP", 00:16:09.179 "adrfam": "IPv4", 00:16:09.179 "traddr": "10.0.0.2", 00:16:09.179 "trsvcid": "4420" 00:16:09.179 }, 00:16:09.179 "peer_address": { 00:16:09.179 "trtype": "TCP", 00:16:09.179 "adrfam": "IPv4", 00:16:09.179 "traddr": "10.0.0.1", 00:16:09.179 "trsvcid": "44230" 00:16:09.179 }, 00:16:09.179 "auth": { 00:16:09.179 "state": "completed", 00:16:09.179 "digest": "sha256", 00:16:09.179 "dhgroup": "ffdhe6144" 00:16:09.179 } 00:16:09.179 } 00:16:09.179 ]' 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.179 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.437 16:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:16:10.003 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.003 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.003 16:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.003 16:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.003 16:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.003 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.003 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:10.003 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.262 16:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.557 00:16:10.557 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.557 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.557 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.830 16:58:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.830 { 00:16:10.830 "cntlid": 37, 00:16:10.830 "qid": 0, 00:16:10.830 "state": "enabled", 00:16:10.830 "thread": "nvmf_tgt_poll_group_000", 00:16:10.830 "listen_address": { 00:16:10.830 "trtype": "TCP", 00:16:10.830 "adrfam": "IPv4", 00:16:10.830 "traddr": "10.0.0.2", 00:16:10.830 "trsvcid": "4420" 00:16:10.830 }, 00:16:10.830 "peer_address": { 00:16:10.830 "trtype": "TCP", 00:16:10.830 "adrfam": "IPv4", 00:16:10.830 "traddr": "10.0.0.1", 00:16:10.830 "trsvcid": "54188" 00:16:10.830 }, 00:16:10.830 "auth": { 00:16:10.830 "state": "completed", 00:16:10.830 "digest": "sha256", 00:16:10.830 "dhgroup": "ffdhe6144" 00:16:10.830 } 00:16:10.830 } 00:16:10.830 ]' 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.830 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.088 16:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:16:11.652 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.653 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.653 16:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.653 16:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.653 16:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.653 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.653 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.653 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.910 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:11.911 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.911 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.911 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:11.911 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:11.911 16:58:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.911 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:11.911 16:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.911 16:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.911 16:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.911 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.911 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.168 00:16:12.168 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.168 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.168 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.426 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.426 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.426 16:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.426 16:58:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.426 16:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.426 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.426 { 00:16:12.426 "cntlid": 39, 00:16:12.426 "qid": 0, 00:16:12.426 "state": "enabled", 00:16:12.426 "thread": "nvmf_tgt_poll_group_000", 00:16:12.426 "listen_address": { 00:16:12.426 "trtype": "TCP", 00:16:12.426 "adrfam": "IPv4", 00:16:12.426 "traddr": "10.0.0.2", 00:16:12.426 "trsvcid": "4420" 00:16:12.426 }, 00:16:12.426 "peer_address": { 00:16:12.426 "trtype": "TCP", 00:16:12.426 "adrfam": "IPv4", 00:16:12.426 "traddr": "10.0.0.1", 00:16:12.426 "trsvcid": "54204" 00:16:12.426 }, 00:16:12.426 "auth": { 00:16:12.426 "state": "completed", 00:16:12.426 "digest": "sha256", 00:16:12.426 "dhgroup": "ffdhe6144" 00:16:12.426 } 00:16:12.426 } 00:16:12.426 ]' 00:16:12.426 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.426 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.426 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.426 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.426 16:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.426 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.426 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.426 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.684 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:16:13.248 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.248 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.248 16:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.249 16:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.249 16:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.249 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.249 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.249 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.249 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.506 16:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.763 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.020 { 00:16:14.020 "cntlid": 41, 00:16:14.020 "qid": 0, 00:16:14.020 "state": "enabled", 00:16:14.020 "thread": "nvmf_tgt_poll_group_000", 00:16:14.020 "listen_address": { 00:16:14.020 "trtype": "TCP", 00:16:14.020 "adrfam": "IPv4", 00:16:14.020 "traddr": "10.0.0.2", 00:16:14.020 "trsvcid": "4420" 00:16:14.020 }, 00:16:14.020 "peer_address": { 00:16:14.020 "trtype": "TCP", 00:16:14.020 "adrfam": "IPv4", 00:16:14.020 "traddr": "10.0.0.1", 00:16:14.020 "trsvcid": "54224" 00:16:14.020 }, 00:16:14.020 "auth": { 00:16:14.020 "state": "completed", 00:16:14.020 "digest": "sha256", 00:16:14.020 "dhgroup": "ffdhe8192" 00:16:14.020 } 00:16:14.020 } 00:16:14.020 ]' 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.020 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.278 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.278 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.278 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.278 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.278 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:14.278 16:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:16:14.843 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.843 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.843 16:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.843 16:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.100 16:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.665 00:16:15.665 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.665 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.665 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.923 { 00:16:15.923 "cntlid": 43, 00:16:15.923 "qid": 0, 00:16:15.923 "state": "enabled", 00:16:15.923 "thread": "nvmf_tgt_poll_group_000", 00:16:15.923 "listen_address": { 00:16:15.923 "trtype": "TCP", 00:16:15.923 "adrfam": "IPv4", 00:16:15.923 "traddr": "10.0.0.2", 00:16:15.923 "trsvcid": "4420" 00:16:15.923 }, 00:16:15.923 "peer_address": { 00:16:15.923 "trtype": "TCP", 00:16:15.923 "adrfam": "IPv4", 00:16:15.923 "traddr": "10.0.0.1", 00:16:15.923 "trsvcid": "54236" 00:16:15.923 }, 00:16:15.923 "auth": { 00:16:15.923 "state": "completed", 00:16:15.923 "digest": "sha256", 00:16:15.923 "dhgroup": "ffdhe8192" 00:16:15.923 } 00:16:15.923 } 00:16:15.923 ]' 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.923 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.924 16:58:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.181 16:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:16:16.744 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.744 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.744 16:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.744 16:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.744 16:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.744 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.744 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.744 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.002 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.259 00:16:17.518 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.518 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.518 16:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:16:17.518 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.518 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.518 16:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.518 16:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.518 16:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.518 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.518 { 00:16:17.518 "cntlid": 45, 00:16:17.518 "qid": 0, 00:16:17.518 "state": "enabled", 00:16:17.518 "thread": "nvmf_tgt_poll_group_000", 00:16:17.518 "listen_address": { 00:16:17.518 "trtype": "TCP", 00:16:17.518 "adrfam": "IPv4", 00:16:17.518 "traddr": "10.0.0.2", 00:16:17.518 "trsvcid": "4420" 00:16:17.518 }, 00:16:17.518 "peer_address": { 00:16:17.518 "trtype": "TCP", 00:16:17.518 "adrfam": "IPv4", 00:16:17.518 "traddr": "10.0.0.1", 00:16:17.518 "trsvcid": "54256" 00:16:17.518 }, 00:16:17.518 "auth": { 00:16:17.518 "state": "completed", 00:16:17.518 "digest": "sha256", 00:16:17.518 "dhgroup": "ffdhe8192" 00:16:17.518 } 00:16:17.518 } 00:16:17.518 ]' 00:16:17.518 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.518 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.518 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.776 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.776 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.776 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.776 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:17.776 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.033 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:16:18.599 16:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:18.599 16:58:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:18.599 16:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.600 16:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.600 16:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.600 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.600 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.165 00:16:19.165 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.165 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.165 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.423 { 00:16:19.423 "cntlid": 47, 00:16:19.423 "qid": 0, 00:16:19.423 "state": "enabled", 00:16:19.423 "thread": "nvmf_tgt_poll_group_000", 00:16:19.423 "listen_address": { 00:16:19.423 "trtype": "TCP", 00:16:19.423 "adrfam": "IPv4", 00:16:19.423 "traddr": "10.0.0.2", 00:16:19.423 "trsvcid": "4420" 00:16:19.423 }, 00:16:19.423 "peer_address": { 00:16:19.423 "trtype": "TCP", 00:16:19.423 "adrfam": "IPv4", 00:16:19.423 "traddr": "10.0.0.1", 00:16:19.423 "trsvcid": "54274" 00:16:19.423 }, 00:16:19.423 "auth": { 00:16:19.423 "state": "completed", 00:16:19.423 "digest": "sha256", 00:16:19.423 "dhgroup": "ffdhe8192" 00:16:19.423 } 00:16:19.423 } 00:16:19.423 ]' 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.423 16:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.681 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.246 16:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.504 00:16:20.504 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.504 16:58:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.504 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.761 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.762 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.762 16:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.762 16:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 16:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.762 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.762 { 00:16:20.762 "cntlid": 49, 00:16:20.762 "qid": 0, 00:16:20.762 "state": "enabled", 00:16:20.762 "thread": "nvmf_tgt_poll_group_000", 00:16:20.762 "listen_address": { 00:16:20.762 "trtype": "TCP", 00:16:20.762 "adrfam": "IPv4", 00:16:20.762 "traddr": "10.0.0.2", 00:16:20.762 "trsvcid": "4420" 00:16:20.762 }, 00:16:20.762 "peer_address": { 00:16:20.762 "trtype": "TCP", 00:16:20.762 "adrfam": "IPv4", 00:16:20.762 "traddr": "10.0.0.1", 00:16:20.762 "trsvcid": "35678" 00:16:20.762 }, 00:16:20.762 "auth": { 00:16:20.762 "state": "completed", 00:16:20.762 "digest": "sha384", 00:16:20.762 "dhgroup": "null" 00:16:20.762 } 00:16:20.762 } 00:16:20.762 ]' 00:16:20.762 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.762 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.762 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.762 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:20.762 16:58:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.019 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.019 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.019 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.019 16:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:16:21.585 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.585 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.585 16:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.585 16:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.585 16:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.585 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.585 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:21.585 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:21.842 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:21.842 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.843 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:21.843 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:21.843 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:21.843 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.843 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.843 16:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.843 16:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.843 16:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.843 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.843 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.100 00:16:22.100 
16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.100 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.100 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.358 { 00:16:22.358 "cntlid": 51, 00:16:22.358 "qid": 0, 00:16:22.358 "state": "enabled", 00:16:22.358 "thread": "nvmf_tgt_poll_group_000", 00:16:22.358 "listen_address": { 00:16:22.358 "trtype": "TCP", 00:16:22.358 "adrfam": "IPv4", 00:16:22.358 "traddr": "10.0.0.2", 00:16:22.358 "trsvcid": "4420" 00:16:22.358 }, 00:16:22.358 "peer_address": { 00:16:22.358 "trtype": "TCP", 00:16:22.358 "adrfam": "IPv4", 00:16:22.358 "traddr": "10.0.0.1", 00:16:22.358 "trsvcid": "35712" 00:16:22.358 }, 00:16:22.358 "auth": { 00:16:22.358 "state": "completed", 00:16:22.358 "digest": "sha384", 00:16:22.358 "dhgroup": "null" 00:16:22.358 } 00:16:22.358 } 00:16:22.358 ]' 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.358 16:58:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.358 16:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.616 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:16:23.180 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.180 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.180 16:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.180 16:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.180 16:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.180 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.180 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:23.181 16:58:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:23.438 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:23.438 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.438 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:23.438 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:23.438 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:23.438 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.439 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.439 16:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.439 16:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.439 16:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.439 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.439 16:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:23.696 00:16:23.696 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.696 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.696 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.696 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.696 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.696 16:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.696 16:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.696 16:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.696 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.696 { 00:16:23.696 "cntlid": 53, 00:16:23.696 "qid": 0, 00:16:23.696 "state": "enabled", 00:16:23.696 "thread": "nvmf_tgt_poll_group_000", 00:16:23.697 "listen_address": { 00:16:23.697 "trtype": "TCP", 00:16:23.697 "adrfam": "IPv4", 00:16:23.697 "traddr": "10.0.0.2", 00:16:23.697 "trsvcid": "4420" 00:16:23.697 }, 00:16:23.697 "peer_address": { 00:16:23.697 "trtype": "TCP", 00:16:23.697 "adrfam": "IPv4", 00:16:23.697 "traddr": "10.0.0.1", 00:16:23.697 "trsvcid": "35740" 00:16:23.697 }, 00:16:23.697 "auth": { 00:16:23.697 "state": "completed", 00:16:23.697 "digest": "sha384", 00:16:23.697 "dhgroup": "null" 00:16:23.697 } 00:16:23.697 } 00:16:23.697 ]' 00:16:23.697 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.954 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.954 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:16:23.954 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:23.954 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.954 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.954 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.954 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.211 16:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:24.776 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:16:25.034 00:16:25.034 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.034 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.034 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.304 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.304 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.304 16:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.304 16:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.304 16:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.304 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.304 { 00:16:25.304 "cntlid": 55, 00:16:25.304 "qid": 0, 00:16:25.304 "state": "enabled", 00:16:25.304 "thread": "nvmf_tgt_poll_group_000", 00:16:25.304 "listen_address": { 00:16:25.304 "trtype": "TCP", 00:16:25.304 "adrfam": "IPv4", 00:16:25.304 "traddr": "10.0.0.2", 00:16:25.304 "trsvcid": "4420" 00:16:25.304 }, 00:16:25.304 "peer_address": { 00:16:25.304 "trtype": "TCP", 00:16:25.304 "adrfam": "IPv4", 00:16:25.304 "traddr": "10.0.0.1", 00:16:25.304 "trsvcid": "35776" 00:16:25.304 }, 00:16:25.304 "auth": { 00:16:25.304 "state": "completed", 00:16:25.304 "digest": "sha384", 00:16:25.304 "dhgroup": "null" 00:16:25.304 } 00:16:25.304 } 00:16:25.304 ]' 00:16:25.304 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.304 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.304 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.304 
16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:25.304 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.622 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.622 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.622 16:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.622 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:16:26.187 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.187 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.187 16:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.187 16:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.187 16:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.187 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.187 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.187 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:26.187 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.445 16:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.702 00:16:26.702 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.702 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.702 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.702 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.702 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.702 16:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.702 16:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.702 16:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.702 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.702 { 00:16:26.702 "cntlid": 57, 00:16:26.702 "qid": 0, 00:16:26.702 "state": "enabled", 00:16:26.702 "thread": "nvmf_tgt_poll_group_000", 00:16:26.702 "listen_address": { 00:16:26.702 "trtype": "TCP", 00:16:26.702 "adrfam": "IPv4", 00:16:26.702 "traddr": "10.0.0.2", 00:16:26.702 "trsvcid": "4420" 00:16:26.702 }, 00:16:26.702 "peer_address": { 00:16:26.702 "trtype": "TCP", 00:16:26.702 "adrfam": "IPv4", 00:16:26.702 "traddr": "10.0.0.1", 00:16:26.702 "trsvcid": "35804" 00:16:26.702 }, 00:16:26.702 "auth": { 00:16:26.702 "state": "completed", 00:16:26.702 "digest": "sha384", 00:16:26.702 "dhgroup": "ffdhe2048" 00:16:26.702 } 00:16:26.702 } 00:16:26.702 ]' 00:16:26.702 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.960 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:16:26.960 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.960 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.960 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.960 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.960 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.960 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.216 16:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:27.782 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:28.039 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.039 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.039 16:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.039 16:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.039 16:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.039 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.039 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.039 00:16:28.039 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.039 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.039 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.297 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.297 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.297 16:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.297 16:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.297 16:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.297 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.297 { 00:16:28.297 "cntlid": 59, 00:16:28.297 "qid": 0, 00:16:28.297 "state": "enabled", 00:16:28.297 "thread": "nvmf_tgt_poll_group_000", 00:16:28.297 "listen_address": { 00:16:28.297 "trtype": "TCP", 00:16:28.297 "adrfam": "IPv4", 00:16:28.297 "traddr": "10.0.0.2", 00:16:28.297 "trsvcid": "4420" 00:16:28.297 }, 00:16:28.297 "peer_address": { 00:16:28.297 "trtype": "TCP", 00:16:28.297 "adrfam": "IPv4", 00:16:28.297 "traddr": "10.0.0.1", 00:16:28.297 "trsvcid": "35822" 00:16:28.297 }, 00:16:28.297 "auth": { 00:16:28.297 "state": "completed", 00:16:28.297 "digest": "sha384", 00:16:28.297 "dhgroup": "ffdhe2048" 00:16:28.297 } 00:16:28.297 } 00:16:28.297 ]' 00:16:28.297 
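The `nvmf_subsystem_get_qpairs` output above, followed by the three `jq` pipelines, is how the test confirms DH-CHAP actually completed with the negotiated parameters. A minimal self-contained Python sketch of that same verification (the JSON is a sample shaped like the RPC output in this log, with timestamps stripped; this is an illustration, not SPDK code):

```python
import json

# Sample qpair entry shaped like the nvmf_subsystem_get_qpairs output above
# (timestamps stripped); field names match the SPDK RPC reply in the log.
qpairs = json.loads("""
[
  {
    "cntlid": 59,
    "qid": 0,
    "state": "enabled",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
    "peer_address": {"trtype": "TCP", "adrfam": "IPv4",
                     "traddr": "10.0.0.1", "trsvcid": "35822"},
    "auth": {"state": "completed", "digest": "sha384", "dhgroup": "ffdhe2048"}
  }
]
""")

auth = qpairs[0]["auth"]
# Same three checks as the jq pipelines in the log: authentication finished,
# and it used exactly the digest and DH group that bdev_nvme_set_options
# restricted the host to for this iteration.
assert auth["state"] == "completed"
assert auth["digest"] == "sha384"
assert auth["dhgroup"] == "ffdhe2048"
```

If any of the three fields disagreed with the configured digest/dhgroup pair, the corresponding `[[ ... == ... ]]` test in `target/auth.sh` would fail and abort the run.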
16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.297 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.297 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.297 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.297 16:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.554 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.554 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.554 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.554 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:16:29.118 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.374 16:58:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.374 16:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.374 16:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.374 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:29.374 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.631 00:16:29.631 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.631 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.631 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.888 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.888 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.888 16:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.888 16:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.888 16:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.888 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.888 { 00:16:29.888 "cntlid": 61, 00:16:29.888 "qid": 0, 00:16:29.888 "state": "enabled", 00:16:29.888 "thread": "nvmf_tgt_poll_group_000", 00:16:29.888 "listen_address": { 00:16:29.888 "trtype": "TCP", 00:16:29.888 "adrfam": "IPv4", 00:16:29.888 "traddr": "10.0.0.2", 00:16:29.888 "trsvcid": "4420" 00:16:29.888 }, 00:16:29.888 "peer_address": { 00:16:29.888 "trtype": "TCP", 00:16:29.888 "adrfam": "IPv4", 00:16:29.888 "traddr": "10.0.0.1", 00:16:29.888 "trsvcid": "35844" 00:16:29.888 }, 00:16:29.888 "auth": { 00:16:29.888 "state": "completed", 00:16:29.888 "digest": 
"sha384", 00:16:29.888 "dhgroup": "ffdhe2048" 00:16:29.888 } 00:16:29.888 } 00:16:29.888 ]' 00:16:29.888 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.888 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.888 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.888 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.888 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.145 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.145 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.145 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.146 16:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:16:30.711 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.711 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.711 16:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.711 16:58:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.711 16:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.711 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.711 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:30.711 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.969 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.227 00:16:31.227 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.227 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.227 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.485 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.485 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.485 16:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.485 16:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.485 16:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.485 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.485 { 00:16:31.485 "cntlid": 63, 00:16:31.485 "qid": 0, 00:16:31.485 "state": "enabled", 00:16:31.485 "thread": "nvmf_tgt_poll_group_000", 00:16:31.485 "listen_address": { 00:16:31.485 "trtype": "TCP", 00:16:31.485 "adrfam": "IPv4", 00:16:31.485 "traddr": "10.0.0.2", 00:16:31.485 "trsvcid": "4420" 00:16:31.485 }, 00:16:31.485 "peer_address": { 00:16:31.485 "trtype": "TCP", 00:16:31.485 "adrfam": "IPv4", 00:16:31.485 "traddr": "10.0.0.1", 00:16:31.485 "trsvcid": "41096" 00:16:31.485 }, 00:16:31.485 "auth": 
{ 00:16:31.485 "state": "completed", 00:16:31.485 "digest": "sha384", 00:16:31.485 "dhgroup": "ffdhe2048" 00:16:31.485 } 00:16:31.485 } 00:16:31.485 ]' 00:16:31.485 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.485 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.485 16:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.485 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.485 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.485 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.485 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.485 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.742 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:16:32.307 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.307 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.307 16:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.307 16:58:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.307 16:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.307 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.307 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.307 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:32.307 16:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.565 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.822 00:16:32.822 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.822 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.822 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.822 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.822 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.822 16:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.822 16:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.822 16:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.822 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.822 { 00:16:32.822 "cntlid": 65, 00:16:32.822 "qid": 0, 00:16:32.822 "state": "enabled", 00:16:32.822 "thread": "nvmf_tgt_poll_group_000", 00:16:32.822 "listen_address": { 00:16:32.822 "trtype": "TCP", 00:16:32.822 "adrfam": "IPv4", 00:16:32.822 "traddr": "10.0.0.2", 00:16:32.822 "trsvcid": "4420" 00:16:32.822 }, 00:16:32.822 "peer_address": { 00:16:32.822 "trtype": "TCP", 
00:16:32.822 "adrfam": "IPv4", 00:16:32.822 "traddr": "10.0.0.1", 00:16:32.822 "trsvcid": "41108" 00:16:32.822 }, 00:16:32.822 "auth": { 00:16:32.822 "state": "completed", 00:16:32.822 "digest": "sha384", 00:16:32.822 "dhgroup": "ffdhe3072" 00:16:32.822 } 00:16:32.822 } 00:16:32.822 ]' 00:16:32.822 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.078 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.078 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.078 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.078 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.078 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.078 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.078 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.334 16:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.902 16:58:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.902 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.160 00:16:34.160 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.160 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.160 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.418 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.418 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.418 16:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.418 16:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.418 16:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.418 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.418 { 00:16:34.418 "cntlid": 67, 00:16:34.418 "qid": 0, 00:16:34.418 "state": "enabled", 00:16:34.418 "thread": "nvmf_tgt_poll_group_000", 00:16:34.418 "listen_address": { 00:16:34.418 "trtype": "TCP", 00:16:34.418 "adrfam": 
"IPv4", 00:16:34.418 "traddr": "10.0.0.2", 00:16:34.418 "trsvcid": "4420" 00:16:34.418 }, 00:16:34.418 "peer_address": { 00:16:34.418 "trtype": "TCP", 00:16:34.418 "adrfam": "IPv4", 00:16:34.418 "traddr": "10.0.0.1", 00:16:34.418 "trsvcid": "41126" 00:16:34.418 }, 00:16:34.418 "auth": { 00:16:34.418 "state": "completed", 00:16:34.418 "digest": "sha384", 00:16:34.418 "dhgroup": "ffdhe3072" 00:16:34.418 } 00:16:34.418 } 00:16:34.418 ]' 00:16:34.418 16:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.418 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.418 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.418 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.418 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.677 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.677 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.677 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.677 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:16:35.243 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:35.243 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.243 16:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.243 16:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.243 16:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.243 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.243 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.243 16:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.500 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:35.500 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.501 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:35.501 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:35.501 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.501 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.501 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.501 16:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.501 16:58:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.501 16:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.501 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.501 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.758 00:16:35.758 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.758 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.758 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.016 { 00:16:36.016 "cntlid": 69, 00:16:36.016 "qid": 0, 00:16:36.016 "state": "enabled", 00:16:36.016 "thread": 
"nvmf_tgt_poll_group_000", 00:16:36.016 "listen_address": { 00:16:36.016 "trtype": "TCP", 00:16:36.016 "adrfam": "IPv4", 00:16:36.016 "traddr": "10.0.0.2", 00:16:36.016 "trsvcid": "4420" 00:16:36.016 }, 00:16:36.016 "peer_address": { 00:16:36.016 "trtype": "TCP", 00:16:36.016 "adrfam": "IPv4", 00:16:36.016 "traddr": "10.0.0.1", 00:16:36.016 "trsvcid": "41158" 00:16:36.016 }, 00:16:36.016 "auth": { 00:16:36.016 "state": "completed", 00:16:36.016 "digest": "sha384", 00:16:36.016 "dhgroup": "ffdhe3072" 00:16:36.016 } 00:16:36.016 } 00:16:36.016 ]' 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.016 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.273 16:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:16:36.839 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.839 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.839 16:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.839 16:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.839 16:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.839 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.839 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:36.839 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.097 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.355 00:16:37.355 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.355 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.355 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.355 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.355 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.355 16:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.355 16:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.355 16:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.355 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.355 { 00:16:37.355 "cntlid": 71, 00:16:37.355 "qid": 0, 00:16:37.355 "state": "enabled", 00:16:37.355 "thread": 
"nvmf_tgt_poll_group_000", 00:16:37.355 "listen_address": { 00:16:37.355 "trtype": "TCP", 00:16:37.355 "adrfam": "IPv4", 00:16:37.355 "traddr": "10.0.0.2", 00:16:37.355 "trsvcid": "4420" 00:16:37.355 }, 00:16:37.355 "peer_address": { 00:16:37.355 "trtype": "TCP", 00:16:37.355 "adrfam": "IPv4", 00:16:37.355 "traddr": "10.0.0.1", 00:16:37.355 "trsvcid": "41198" 00:16:37.355 }, 00:16:37.355 "auth": { 00:16:37.355 "state": "completed", 00:16:37.355 "digest": "sha384", 00:16:37.355 "dhgroup": "ffdhe3072" 00:16:37.355 } 00:16:37.355 } 00:16:37.355 ]' 00:16:37.355 16:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.613 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.613 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.613 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.613 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.613 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.613 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.613 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.871 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:16:38.438 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.438 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.438 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.438 16:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.438 16:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.438 16:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.438 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.438 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.438 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.438 16:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.438 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.695 00:16:38.696 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.696 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.696 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.954 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.954 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.954 16:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.954 16:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.954 16:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.954 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:38.954 { 00:16:38.954 "cntlid": 73, 00:16:38.954 "qid": 0, 00:16:38.954 "state": "enabled", 00:16:38.954 "thread": "nvmf_tgt_poll_group_000", 00:16:38.954 "listen_address": { 00:16:38.954 "trtype": "TCP", 00:16:38.954 "adrfam": "IPv4", 00:16:38.954 "traddr": "10.0.0.2", 00:16:38.954 "trsvcid": "4420" 00:16:38.954 }, 00:16:38.954 "peer_address": { 00:16:38.954 "trtype": "TCP", 00:16:38.954 "adrfam": "IPv4", 00:16:38.954 "traddr": "10.0.0.1", 00:16:38.954 "trsvcid": "41230" 00:16:38.954 }, 00:16:38.954 "auth": { 00:16:38.954 "state": "completed", 00:16:38.954 "digest": "sha384", 00:16:38.954 "dhgroup": "ffdhe4096" 00:16:38.954 } 00:16:38.954 } 00:16:38.954 ]' 00:16:38.954 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.954 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.954 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.214 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.214 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.214 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.214 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.214 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.214 16:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret 
DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:16:39.817 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.817 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.817 16:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.817 16:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.818 16:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.818 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.818 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.818 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.076 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.334 00:16:40.334 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.334 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.334 16:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.592 16:58:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.592 { 00:16:40.592 "cntlid": 75, 00:16:40.592 "qid": 0, 00:16:40.592 "state": "enabled", 00:16:40.592 "thread": "nvmf_tgt_poll_group_000", 00:16:40.592 "listen_address": { 00:16:40.592 "trtype": "TCP", 00:16:40.592 "adrfam": "IPv4", 00:16:40.592 "traddr": "10.0.0.2", 00:16:40.592 "trsvcid": "4420" 00:16:40.592 }, 00:16:40.592 "peer_address": { 00:16:40.592 "trtype": "TCP", 00:16:40.592 "adrfam": "IPv4", 00:16:40.592 "traddr": "10.0.0.1", 00:16:40.592 "trsvcid": "56406" 00:16:40.592 }, 00:16:40.592 "auth": { 00:16:40.592 "state": "completed", 00:16:40.592 "digest": "sha384", 00:16:40.592 "dhgroup": "ffdhe4096" 00:16:40.592 } 00:16:40.592 } 00:16:40.592 ]' 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.592 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.850 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:16:41.414 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.414 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.414 16:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.414 16:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.414 16:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.414 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.414 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.414 16:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.671 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.929 00:16:41.929 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.929 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.929 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.186 16:58:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.186 { 00:16:42.186 "cntlid": 77, 00:16:42.186 "qid": 0, 00:16:42.186 "state": "enabled", 00:16:42.186 "thread": "nvmf_tgt_poll_group_000", 00:16:42.186 "listen_address": { 00:16:42.186 "trtype": "TCP", 00:16:42.186 "adrfam": "IPv4", 00:16:42.186 "traddr": "10.0.0.2", 00:16:42.186 "trsvcid": "4420" 00:16:42.186 }, 00:16:42.186 "peer_address": { 00:16:42.186 "trtype": "TCP", 00:16:42.186 "adrfam": "IPv4", 00:16:42.186 "traddr": "10.0.0.1", 00:16:42.186 "trsvcid": "56432" 00:16:42.186 }, 00:16:42.186 "auth": { 00:16:42.186 "state": "completed", 00:16:42.186 "digest": "sha384", 00:16:42.186 "dhgroup": "ffdhe4096" 00:16:42.186 } 00:16:42.186 } 00:16:42.186 ]' 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.186 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.443 16:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:16:43.007 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.007 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.007 16:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.007 16:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.007 16:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.007 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.007 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:43.007 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:43.264 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:43.264 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.264 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:43.264 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:43.264 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:43.264 16:58:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.264 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:43.264 16:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.264 16:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.264 16:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.264 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.264 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.522 00:16:43.522 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.522 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.522 16:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.522 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.522 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.522 16:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.522 16:58:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.522 16:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.522 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.522 { 00:16:43.522 "cntlid": 79, 00:16:43.522 "qid": 0, 00:16:43.522 "state": "enabled", 00:16:43.522 "thread": "nvmf_tgt_poll_group_000", 00:16:43.522 "listen_address": { 00:16:43.522 "trtype": "TCP", 00:16:43.522 "adrfam": "IPv4", 00:16:43.522 "traddr": "10.0.0.2", 00:16:43.522 "trsvcid": "4420" 00:16:43.522 }, 00:16:43.522 "peer_address": { 00:16:43.522 "trtype": "TCP", 00:16:43.522 "adrfam": "IPv4", 00:16:43.522 "traddr": "10.0.0.1", 00:16:43.522 "trsvcid": "56460" 00:16:43.522 }, 00:16:43.522 "auth": { 00:16:43.522 "state": "completed", 00:16:43.522 "digest": "sha384", 00:16:43.522 "dhgroup": "ffdhe4096" 00:16:43.522 } 00:16:43.522 } 00:16:43.522 ]' 00:16:43.522 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.779 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.780 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.780 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.780 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.780 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.780 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.780 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.036 16:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.599 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.161 00:16:45.161 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.161 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.161 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.161 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.161 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:45.161 16:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.161 16:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.161 16:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.161 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.161 { 00:16:45.161 "cntlid": 81, 00:16:45.161 "qid": 0, 00:16:45.161 "state": "enabled", 00:16:45.161 "thread": "nvmf_tgt_poll_group_000", 00:16:45.161 "listen_address": { 00:16:45.161 "trtype": "TCP", 00:16:45.161 "adrfam": "IPv4", 00:16:45.161 "traddr": "10.0.0.2", 00:16:45.161 "trsvcid": "4420" 00:16:45.161 }, 00:16:45.161 "peer_address": { 00:16:45.161 "trtype": "TCP", 00:16:45.161 "adrfam": "IPv4", 00:16:45.161 "traddr": "10.0.0.1", 00:16:45.161 "trsvcid": "56492" 00:16:45.161 }, 00:16:45.161 "auth": { 00:16:45.161 "state": "completed", 00:16:45.161 "digest": "sha384", 00:16:45.161 "dhgroup": "ffdhe6144" 00:16:45.161 } 00:16:45.161 } 00:16:45.161 ]' 00:16:45.161 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.418 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.418 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.418 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.418 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.418 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.418 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.418 16:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:45.674 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:16:46.237 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.237 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.237 16:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.237 16:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.237 16:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.237 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.237 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.237 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.494 16:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.751 00:16:46.751 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.751 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.751 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.009 { 00:16:47.009 "cntlid": 83, 00:16:47.009 "qid": 0, 00:16:47.009 "state": "enabled", 00:16:47.009 "thread": "nvmf_tgt_poll_group_000", 00:16:47.009 "listen_address": { 00:16:47.009 "trtype": "TCP", 00:16:47.009 "adrfam": "IPv4", 00:16:47.009 "traddr": "10.0.0.2", 00:16:47.009 "trsvcid": "4420" 00:16:47.009 }, 00:16:47.009 "peer_address": { 00:16:47.009 "trtype": "TCP", 00:16:47.009 "adrfam": "IPv4", 00:16:47.009 "traddr": "10.0.0.1", 00:16:47.009 "trsvcid": "56514" 00:16:47.009 }, 00:16:47.009 "auth": { 00:16:47.009 "state": "completed", 00:16:47.009 "digest": "sha384", 00:16:47.009 "dhgroup": "ffdhe6144" 00:16:47.009 } 00:16:47.009 } 00:16:47.009 ]' 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.009 16:58:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.267 16:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:16:47.832 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.832 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.832 16:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.832 16:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.832 16:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.832 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.832 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:47.832 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
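Each `connect_authenticate` pass above verifies the negotiated session by piping `nvmf_subsystem_get_qpairs` output through `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) and comparing against the expected values. A minimal Python sketch of the same check, using a qpairs record copied from the log (field names exactly as the RPC returns them; this is an illustration, not part of the test script):

```python
import json

# One qpairs entry as returned by nvmf_subsystem_get_qpairs (trimmed from the log above).
qpairs = json.loads("""
[
  {
    "cntlid": 83,
    "qid": 0,
    "state": "enabled",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420"},
    "peer_address":   {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.1", "trsvcid": "56514"},
    "auth": {"state": "completed", "digest": "sha384", "dhgroup": "ffdhe6144"}
  }
]
""")

auth = qpairs[0]["auth"]
# Python equivalents of the shell assertions [[ sha384 == sha384 ]] etc.
assert auth["digest"] == "sha384"
assert auth["dhgroup"] == "ffdhe6144"
assert auth["state"] == "completed"
```

The test repeats this cycle for every digest/dhgroup/key combination; only the expected `dhgroup` string changes between passes (`ffdhe4096`, `ffdhe6144`, `ffdhe8192`).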
00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.091 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.349 00:16:48.349 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.349 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.349 16:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
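The `--dhchap-secret` strings passed to `nvme connect` above are NVMe DH-HMAC-CHAP secrets in the `DHHC-1:<hash>:<base64>:` wire format. A small Python sketch splitting one of the secrets that appears verbatim in this log (the interpretation of the fields follows the DH-HMAC-CHAP secret representation; the trailing bytes being a CRC-32 of the key material is an assumption from that spec, not something this log demonstrates):

```python
import base64

# Secret copied verbatim from the log above (hash id 03).
key = ("DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1"
       "ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=:")

# Format is prefix : hash identifier : base64 payload : (note the trailing colon).
prefix, hash_id, b64, trailer = key.split(":")
assert prefix == "DHHC-1"
assert trailer == ""

secret = base64.b64decode(b64)
# Payload here decodes to 68 bytes: per the DH-HMAC-CHAP secret format this would be
# a 64-byte key followed by a 4-byte CRC-32 (assumption; not checked against the log).
assert len(secret) == 68
```

The `00`/`01`/`02`/`03` hash identifiers line up with the secret lengths used across the log's `--dhchap-secret`/`--dhchap-ctrl-secret` pairs.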
00:16:48.606 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.606 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.606 16:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.606 16:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.606 16:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.606 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.606 { 00:16:48.606 "cntlid": 85, 00:16:48.606 "qid": 0, 00:16:48.606 "state": "enabled", 00:16:48.606 "thread": "nvmf_tgt_poll_group_000", 00:16:48.606 "listen_address": { 00:16:48.606 "trtype": "TCP", 00:16:48.606 "adrfam": "IPv4", 00:16:48.606 "traddr": "10.0.0.2", 00:16:48.606 "trsvcid": "4420" 00:16:48.606 }, 00:16:48.606 "peer_address": { 00:16:48.606 "trtype": "TCP", 00:16:48.606 "adrfam": "IPv4", 00:16:48.606 "traddr": "10.0.0.1", 00:16:48.606 "trsvcid": "56532" 00:16:48.606 }, 00:16:48.606 "auth": { 00:16:48.606 "state": "completed", 00:16:48.606 "digest": "sha384", 00:16:48.606 "dhgroup": "ffdhe6144" 00:16:48.606 } 00:16:48.606 } 00:16:48.606 ]' 00:16:48.606 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.606 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.606 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.606 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.606 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.607 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.607 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:48.607 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.864 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:16:49.430 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.430 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.430 16:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.430 16:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.430 16:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.430 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.430 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:49.430 16:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:49.688 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:49.688 16:58:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.688 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:49.688 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:49.688 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.688 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.688 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:49.688 16:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.688 16:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.688 16:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.688 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.688 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.945 00:16:49.945 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.945 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.946 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.203 { 00:16:50.203 "cntlid": 87, 00:16:50.203 "qid": 0, 00:16:50.203 "state": "enabled", 00:16:50.203 "thread": "nvmf_tgt_poll_group_000", 00:16:50.203 "listen_address": { 00:16:50.203 "trtype": "TCP", 00:16:50.203 "adrfam": "IPv4", 00:16:50.203 "traddr": "10.0.0.2", 00:16:50.203 "trsvcid": "4420" 00:16:50.203 }, 00:16:50.203 "peer_address": { 00:16:50.203 "trtype": "TCP", 00:16:50.203 "adrfam": "IPv4", 00:16:50.203 "traddr": "10.0.0.1", 00:16:50.203 "trsvcid": "56556" 00:16:50.203 }, 00:16:50.203 "auth": { 00:16:50.203 "state": "completed", 00:16:50.203 "digest": "sha384", 00:16:50.203 "dhgroup": "ffdhe6144" 00:16:50.203 } 00:16:50.203 } 00:16:50.203 ]' 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.203 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.461 16:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:16:51.027 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.027 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.027 16:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.027 16:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.027 16:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.027 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.027 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.027 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:51.027 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.285 16:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.543 00:16:51.801 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.801 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.801 16:58:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.801 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.801 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.801 16:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.801 16:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.801 16:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.801 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.801 { 00:16:51.801 "cntlid": 89, 00:16:51.801 "qid": 0, 00:16:51.801 "state": "enabled", 00:16:51.801 "thread": "nvmf_tgt_poll_group_000", 00:16:51.801 "listen_address": { 00:16:51.801 "trtype": "TCP", 00:16:51.801 "adrfam": "IPv4", 00:16:51.801 "traddr": "10.0.0.2", 00:16:51.801 "trsvcid": "4420" 00:16:51.801 }, 00:16:51.801 "peer_address": { 00:16:51.801 "trtype": "TCP", 00:16:51.801 "adrfam": "IPv4", 00:16:51.801 "traddr": "10.0.0.1", 00:16:51.801 "trsvcid": "40970" 00:16:51.801 }, 00:16:51.801 "auth": { 00:16:51.801 "state": "completed", 00:16:51.801 "digest": "sha384", 00:16:51.801 "dhgroup": "ffdhe8192" 00:16:51.801 } 00:16:51.801 } 00:16:51.801 ]' 00:16:51.801 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.801 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.801 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.059 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.059 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.059 16:58:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.059 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.059 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.059 16:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:16:52.624 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.624 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.624 16:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.624 16:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.882 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.447 00:16:53.447 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:16:53.447 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.447 16:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.740 { 00:16:53.740 "cntlid": 91, 00:16:53.740 "qid": 0, 00:16:53.740 "state": "enabled", 00:16:53.740 "thread": "nvmf_tgt_poll_group_000", 00:16:53.740 "listen_address": { 00:16:53.740 "trtype": "TCP", 00:16:53.740 "adrfam": "IPv4", 00:16:53.740 "traddr": "10.0.0.2", 00:16:53.740 "trsvcid": "4420" 00:16:53.740 }, 00:16:53.740 "peer_address": { 00:16:53.740 "trtype": "TCP", 00:16:53.740 "adrfam": "IPv4", 00:16:53.740 "traddr": "10.0.0.1", 00:16:53.740 "trsvcid": "41000" 00:16:53.740 }, 00:16:53.740 "auth": { 00:16:53.740 "state": "completed", 00:16:53.740 "digest": "sha384", 00:16:53.740 "dhgroup": "ffdhe8192" 00:16:53.740 } 00:16:53.740 } 00:16:53.740 ]' 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.740 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.028 16:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.592 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.156 
00:16:55.156 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.156 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.156 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.414 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.414 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.414 16:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.414 16:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.414 16:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.414 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.414 { 00:16:55.414 "cntlid": 93, 00:16:55.414 "qid": 0, 00:16:55.414 "state": "enabled", 00:16:55.414 "thread": "nvmf_tgt_poll_group_000", 00:16:55.414 "listen_address": { 00:16:55.414 "trtype": "TCP", 00:16:55.414 "adrfam": "IPv4", 00:16:55.414 "traddr": "10.0.0.2", 00:16:55.414 "trsvcid": "4420" 00:16:55.414 }, 00:16:55.414 "peer_address": { 00:16:55.414 "trtype": "TCP", 00:16:55.414 "adrfam": "IPv4", 00:16:55.414 "traddr": "10.0.0.1", 00:16:55.414 "trsvcid": "41018" 00:16:55.414 }, 00:16:55.414 "auth": { 00:16:55.414 "state": "completed", 00:16:55.414 "digest": "sha384", 00:16:55.414 "dhgroup": "ffdhe8192" 00:16:55.414 } 00:16:55.414 } 00:16:55.414 ]' 00:16:55.414 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.414 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.414 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.414 16:59:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.414 16:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.414 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.414 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.414 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.672 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:16:56.237 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.237 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.237 16:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.237 16:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.237 16:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.237 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.237 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:16:56.237 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:56.494 16:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.074 
00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.074 { 00:16:57.074 "cntlid": 95, 00:16:57.074 "qid": 0, 00:16:57.074 "state": "enabled", 00:16:57.074 "thread": "nvmf_tgt_poll_group_000", 00:16:57.074 "listen_address": { 00:16:57.074 "trtype": "TCP", 00:16:57.074 "adrfam": "IPv4", 00:16:57.074 "traddr": "10.0.0.2", 00:16:57.074 "trsvcid": "4420" 00:16:57.074 }, 00:16:57.074 "peer_address": { 00:16:57.074 "trtype": "TCP", 00:16:57.074 "adrfam": "IPv4", 00:16:57.074 "traddr": "10.0.0.1", 00:16:57.074 "trsvcid": "41060" 00:16:57.074 }, 00:16:57.074 "auth": { 00:16:57.074 "state": "completed", 00:16:57.074 "digest": "sha384", 00:16:57.074 "dhgroup": "ffdhe8192" 00:16:57.074 } 00:16:57.074 } 00:16:57.074 ]' 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.074 16:59:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.074 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.331 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.331 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.331 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.331 16:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:16:57.896 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.896 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.896 16:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.896 16:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.896 16:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.896 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:57.896 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.896 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:16:57.896 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.896 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.153 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.411 00:16:58.411 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.411 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.411 16:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.668 { 00:16:58.668 "cntlid": 97, 00:16:58.668 "qid": 0, 00:16:58.668 "state": "enabled", 00:16:58.668 "thread": "nvmf_tgt_poll_group_000", 00:16:58.668 "listen_address": { 00:16:58.668 "trtype": "TCP", 00:16:58.668 "adrfam": "IPv4", 00:16:58.668 "traddr": "10.0.0.2", 00:16:58.668 "trsvcid": "4420" 00:16:58.668 }, 00:16:58.668 "peer_address": { 00:16:58.668 "trtype": "TCP", 00:16:58.668 "adrfam": "IPv4", 00:16:58.668 "traddr": "10.0.0.1", 00:16:58.668 "trsvcid": "41098" 00:16:58.668 }, 00:16:58.668 "auth": { 00:16:58.668 "state": "completed", 00:16:58.668 "digest": "sha512", 00:16:58.668 "dhgroup": "null" 00:16:58.668 } 00:16:58.668 } 00:16:58.668 ]' 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.668 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.926 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:16:59.491 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.491 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.491 16:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.491 16:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.491 16:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:59.491 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.491 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.491 16:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.749 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.749 00:17:00.007 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.007 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.007 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.007 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.007 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.007 16:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.007 16:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.007 16:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.007 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.007 { 00:17:00.007 "cntlid": 99, 00:17:00.007 "qid": 0, 00:17:00.007 "state": "enabled", 00:17:00.007 "thread": "nvmf_tgt_poll_group_000", 00:17:00.007 "listen_address": { 00:17:00.007 "trtype": "TCP", 00:17:00.007 "adrfam": "IPv4", 00:17:00.007 "traddr": "10.0.0.2", 00:17:00.007 "trsvcid": "4420" 00:17:00.007 }, 00:17:00.007 "peer_address": { 00:17:00.007 "trtype": "TCP", 00:17:00.007 "adrfam": "IPv4", 00:17:00.007 "traddr": "10.0.0.1", 00:17:00.007 "trsvcid": "41134" 00:17:00.007 }, 00:17:00.007 "auth": { 00:17:00.007 "state": "completed", 00:17:00.007 "digest": "sha512", 00:17:00.007 "dhgroup": "null" 00:17:00.007 } 00:17:00.007 } 00:17:00.007 ]' 00:17:00.007 
16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.007 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.007 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.265 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:00.265 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.265 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.265 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.265 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.265 16:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:17:00.830 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.830 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.830 16:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.830 16:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.830 16:59:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.830 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.830 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.830 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:01.088 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:01.088 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.088 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:01.088 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:01.088 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:01.088 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.088 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.088 16:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.088 16:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.088 16:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.088 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.088 16:59:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.346 00:17:01.346 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.346 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.346 16:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.603 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.603 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.603 16:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.604 16:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.604 16:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.604 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.604 { 00:17:01.604 "cntlid": 101, 00:17:01.604 "qid": 0, 00:17:01.604 "state": "enabled", 00:17:01.604 "thread": "nvmf_tgt_poll_group_000", 00:17:01.604 "listen_address": { 00:17:01.604 "trtype": "TCP", 00:17:01.604 "adrfam": "IPv4", 00:17:01.604 "traddr": "10.0.0.2", 00:17:01.604 "trsvcid": "4420" 00:17:01.604 }, 00:17:01.604 "peer_address": { 00:17:01.604 "trtype": "TCP", 00:17:01.604 "adrfam": "IPv4", 00:17:01.604 "traddr": "10.0.0.1", 00:17:01.604 "trsvcid": "35492" 00:17:01.604 }, 00:17:01.604 "auth": { 00:17:01.604 "state": "completed", 00:17:01.604 "digest": "sha512", 00:17:01.604 "dhgroup": "null" 
00:17:01.604 } 00:17:01.604 } 00:17:01.604 ]' 00:17:01.604 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.604 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.604 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.604 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:01.604 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.604 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.604 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.604 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.861 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:17:02.424 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.424 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.424 16:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.424 16:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:02.424 16:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.424 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.424 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.424 16:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.682 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:02.682 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.682 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.682 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:02.682 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:02.682 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.682 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:02.682 16:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.682 16:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.682 16:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.682 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.682 16:59:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.940 00:17:02.940 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.940 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.940 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.940 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.940 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.940 16:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.940 16:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.197 16:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.197 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.197 { 00:17:03.197 "cntlid": 103, 00:17:03.197 "qid": 0, 00:17:03.197 "state": "enabled", 00:17:03.197 "thread": "nvmf_tgt_poll_group_000", 00:17:03.198 "listen_address": { 00:17:03.198 "trtype": "TCP", 00:17:03.198 "adrfam": "IPv4", 00:17:03.198 "traddr": "10.0.0.2", 00:17:03.198 "trsvcid": "4420" 00:17:03.198 }, 00:17:03.198 "peer_address": { 00:17:03.198 "trtype": "TCP", 00:17:03.198 "adrfam": "IPv4", 00:17:03.198 "traddr": "10.0.0.1", 00:17:03.198 "trsvcid": "35524" 00:17:03.198 }, 00:17:03.198 "auth": { 00:17:03.198 "state": "completed", 00:17:03.198 "digest": "sha512", 00:17:03.198 "dhgroup": "null" 00:17:03.198 } 00:17:03.198 } 
00:17:03.198 ]' 00:17:03.198 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.198 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.198 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.198 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:03.198 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.198 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.198 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.198 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.455 16:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:17:04.019 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.019 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.019 16:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.019 16:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.019 16:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:17:04.019 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.019 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.019 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:04.019 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.276 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.276 00:17:04.533 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.533 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.533 16:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.533 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.533 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.533 16:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.533 16:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.533 16:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.533 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.533 { 00:17:04.533 "cntlid": 105, 00:17:04.533 "qid": 0, 00:17:04.533 "state": "enabled", 00:17:04.533 "thread": "nvmf_tgt_poll_group_000", 00:17:04.533 "listen_address": { 00:17:04.533 "trtype": "TCP", 00:17:04.533 "adrfam": "IPv4", 00:17:04.533 "traddr": "10.0.0.2", 00:17:04.533 "trsvcid": "4420" 00:17:04.533 }, 00:17:04.533 "peer_address": { 00:17:04.533 "trtype": "TCP", 00:17:04.533 "adrfam": "IPv4", 00:17:04.533 "traddr": "10.0.0.1", 00:17:04.533 "trsvcid": "35550" 00:17:04.533 }, 00:17:04.533 "auth": { 00:17:04.533 
"state": "completed", 00:17:04.533 "digest": "sha512", 00:17:04.533 "dhgroup": "ffdhe2048" 00:17:04.533 } 00:17:04.533 } 00:17:04.533 ]' 00:17:04.533 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.533 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.789 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.789 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:04.789 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.789 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.789 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.789 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.044 16:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.605 16:59:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.605 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.606 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.861 00:17:05.861 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.861 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.861 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.166 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.166 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.166 16:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.166 16:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.166 16:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.166 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.166 { 00:17:06.166 "cntlid": 107, 00:17:06.166 "qid": 0, 00:17:06.166 "state": "enabled", 00:17:06.166 "thread": "nvmf_tgt_poll_group_000", 00:17:06.166 "listen_address": { 00:17:06.166 "trtype": "TCP", 00:17:06.166 "adrfam": "IPv4", 00:17:06.166 "traddr": "10.0.0.2", 00:17:06.166 "trsvcid": "4420" 00:17:06.166 }, 00:17:06.166 "peer_address": { 00:17:06.166 "trtype": "TCP", 
00:17:06.166 "adrfam": "IPv4", 00:17:06.166 "traddr": "10.0.0.1", 00:17:06.166 "trsvcid": "35574" 00:17:06.166 }, 00:17:06.166 "auth": { 00:17:06.166 "state": "completed", 00:17:06.166 "digest": "sha512", 00:17:06.166 "dhgroup": "ffdhe2048" 00:17:06.166 } 00:17:06.166 } 00:17:06.166 ]' 00:17:06.166 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.166 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.166 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.167 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.167 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.167 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.167 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.167 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.423 16:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:17:06.988 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.988 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.988 16:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.988 16:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.988 16:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.988 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.988 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:06.988 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.244 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:07.244 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.244 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.244 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:07.245 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:07.245 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.245 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.245 16:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.245 16:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.245 16:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:17:07.245 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.245 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.502 00:17:07.502 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.502 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.502 16:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.502 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.502 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.502 16:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.502 16:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.502 16:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.503 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.503 { 00:17:07.503 "cntlid": 109, 00:17:07.503 "qid": 0, 00:17:07.503 "state": "enabled", 00:17:07.503 "thread": "nvmf_tgt_poll_group_000", 00:17:07.503 "listen_address": { 00:17:07.503 "trtype": "TCP", 00:17:07.503 "adrfam": "IPv4", 00:17:07.503 "traddr": "10.0.0.2", 00:17:07.503 "trsvcid": "4420" 
00:17:07.503 }, 00:17:07.503 "peer_address": { 00:17:07.503 "trtype": "TCP", 00:17:07.503 "adrfam": "IPv4", 00:17:07.503 "traddr": "10.0.0.1", 00:17:07.503 "trsvcid": "35616" 00:17:07.503 }, 00:17:07.503 "auth": { 00:17:07.503 "state": "completed", 00:17:07.503 "digest": "sha512", 00:17:07.503 "dhgroup": "ffdhe2048" 00:17:07.503 } 00:17:07.503 } 00:17:07.503 ]' 00:17:07.503 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.759 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.759 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.759 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.759 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.759 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.759 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.759 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.076 16:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:17:08.360 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.618 16:59:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.618 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.876 00:17:08.876 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.876 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.876 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.133 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.133 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.133 16:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.133 16:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.133 16:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.133 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.133 { 00:17:09.133 "cntlid": 111, 00:17:09.133 "qid": 0, 00:17:09.133 "state": "enabled", 00:17:09.133 "thread": "nvmf_tgt_poll_group_000", 00:17:09.133 "listen_address": { 00:17:09.133 "trtype": "TCP", 00:17:09.133 "adrfam": "IPv4", 00:17:09.133 "traddr": "10.0.0.2", 
00:17:09.133 "trsvcid": "4420" 00:17:09.133 }, 00:17:09.133 "peer_address": { 00:17:09.133 "trtype": "TCP", 00:17:09.133 "adrfam": "IPv4", 00:17:09.133 "traddr": "10.0.0.1", 00:17:09.133 "trsvcid": "35634" 00:17:09.133 }, 00:17:09.133 "auth": { 00:17:09.133 "state": "completed", 00:17:09.133 "digest": "sha512", 00:17:09.133 "dhgroup": "ffdhe2048" 00:17:09.133 } 00:17:09.133 } 00:17:09.134 ]' 00:17:09.134 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.134 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.134 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.134 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.134 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.134 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.134 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.134 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.391 16:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:17:09.955 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.955 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.955 16:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.955 16:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.955 16:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.955 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.955 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.955 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.955 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:10.213 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:10.213 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.213 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:10.213 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:10.213 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:10.213 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.213 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.213 16:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.213 16:59:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.213 16:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.213 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.213 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.470 00:17:10.470 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.470 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.470 16:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.470 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.470 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.470 16:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.470 16:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.470 16:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.727 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.727 { 00:17:10.727 "cntlid": 113, 00:17:10.727 "qid": 0, 00:17:10.727 "state": "enabled", 00:17:10.727 "thread": 
"nvmf_tgt_poll_group_000", 00:17:10.727 "listen_address": { 00:17:10.727 "trtype": "TCP", 00:17:10.727 "adrfam": "IPv4", 00:17:10.727 "traddr": "10.0.0.2", 00:17:10.727 "trsvcid": "4420" 00:17:10.727 }, 00:17:10.727 "peer_address": { 00:17:10.727 "trtype": "TCP", 00:17:10.727 "adrfam": "IPv4", 00:17:10.727 "traddr": "10.0.0.1", 00:17:10.727 "trsvcid": "48388" 00:17:10.727 }, 00:17:10.727 "auth": { 00:17:10.727 "state": "completed", 00:17:10.727 "digest": "sha512", 00:17:10.727 "dhgroup": "ffdhe3072" 00:17:10.727 } 00:17:10.727 } 00:17:10.727 ]' 00:17:10.727 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.727 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.727 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.728 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:10.728 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.728 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.728 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.728 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.985 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:17:11.550 16:59:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.550 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.550 16:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.550 16:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.550 16:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.550 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.550 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.550 16:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.550 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.807 00:17:11.807 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.807 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.807 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.065 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.065 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.065 16:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.065 16:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.065 16:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.065 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:12.065 { 00:17:12.065 "cntlid": 115, 00:17:12.065 "qid": 0, 00:17:12.065 "state": "enabled", 00:17:12.065 "thread": "nvmf_tgt_poll_group_000", 00:17:12.065 "listen_address": { 00:17:12.065 "trtype": "TCP", 00:17:12.065 "adrfam": "IPv4", 00:17:12.065 "traddr": "10.0.0.2", 00:17:12.065 "trsvcid": "4420" 00:17:12.065 }, 00:17:12.065 "peer_address": { 00:17:12.065 "trtype": "TCP", 00:17:12.065 "adrfam": "IPv4", 00:17:12.065 "traddr": "10.0.0.1", 00:17:12.065 "trsvcid": "48412" 00:17:12.065 }, 00:17:12.065 "auth": { 00:17:12.065 "state": "completed", 00:17:12.065 "digest": "sha512", 00:17:12.065 "dhgroup": "ffdhe3072" 00:17:12.065 } 00:17:12.065 } 00:17:12.065 ]' 00:17:12.065 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.065 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.065 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.065 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.065 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.323 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.324 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.324 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.324 16:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret 
DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:17:12.889 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.889 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.889 16:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.889 16:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.889 16:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.889 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.889 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.889 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.146 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:13.147 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.147 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:13.147 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:13.147 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:13.147 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.147 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.147 16:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.147 16:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.147 16:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.147 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.147 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.404 00:17:13.404 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.404 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.404 16:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.662 { 00:17:13.662 "cntlid": 117, 00:17:13.662 "qid": 0, 00:17:13.662 "state": "enabled", 00:17:13.662 "thread": "nvmf_tgt_poll_group_000", 00:17:13.662 "listen_address": { 00:17:13.662 "trtype": "TCP", 00:17:13.662 "adrfam": "IPv4", 00:17:13.662 "traddr": "10.0.0.2", 00:17:13.662 "trsvcid": "4420" 00:17:13.662 }, 00:17:13.662 "peer_address": { 00:17:13.662 "trtype": "TCP", 00:17:13.662 "adrfam": "IPv4", 00:17:13.662 "traddr": "10.0.0.1", 00:17:13.662 "trsvcid": "48444" 00:17:13.662 }, 00:17:13.662 "auth": { 00:17:13.662 "state": "completed", 00:17:13.662 "digest": "sha512", 00:17:13.662 "dhgroup": "ffdhe3072" 00:17:13.662 } 00:17:13.662 } 00:17:13.662 ]' 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.662 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.919 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:17:14.484 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.484 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.484 16:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.484 16:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.484 16:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.484 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.484 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.484 16:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.741 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:14.741 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.741 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:14.741 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:14.741 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:14.741 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.741 16:59:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:14.741 16:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.741 16:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.741 16:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.741 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.741 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.999 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.999 { 00:17:14.999 "cntlid": 119, 00:17:14.999 "qid": 0, 00:17:14.999 "state": "enabled", 00:17:14.999 "thread": "nvmf_tgt_poll_group_000", 00:17:14.999 "listen_address": { 00:17:14.999 "trtype": "TCP", 00:17:14.999 "adrfam": "IPv4", 00:17:14.999 "traddr": "10.0.0.2", 00:17:14.999 "trsvcid": "4420" 00:17:14.999 }, 00:17:14.999 "peer_address": { 00:17:14.999 "trtype": "TCP", 00:17:14.999 "adrfam": "IPv4", 00:17:14.999 "traddr": "10.0.0.1", 00:17:14.999 "trsvcid": "48474" 00:17:14.999 }, 00:17:14.999 "auth": { 00:17:14.999 "state": "completed", 00:17:14.999 "digest": "sha512", 00:17:14.999 "dhgroup": "ffdhe3072" 00:17:14.999 } 00:17:14.999 } 00:17:14.999 ]' 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.999 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.256 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.256 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.256 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.256 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.256 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.256 16:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:17:15.820 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.820 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.820 16:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.820 16:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.820 16:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.820 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.820 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.820 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:15.820 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.078 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:16.078 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.078 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:16.078 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:16.078 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:16.078 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.078 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.078 16:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.079 16:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.079 16:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.079 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.079 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.336 00:17:16.336 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.336 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.336 16:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.593 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.593 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.593 16:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:16.593 16:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.593 16:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.593 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.593 { 00:17:16.593 "cntlid": 121, 00:17:16.593 "qid": 0, 00:17:16.593 "state": "enabled", 00:17:16.593 "thread": "nvmf_tgt_poll_group_000", 00:17:16.593 "listen_address": { 00:17:16.593 "trtype": "TCP", 00:17:16.593 "adrfam": "IPv4", 00:17:16.593 "traddr": "10.0.0.2", 00:17:16.593 "trsvcid": "4420" 00:17:16.593 }, 00:17:16.593 "peer_address": { 00:17:16.593 "trtype": "TCP", 00:17:16.593 "adrfam": "IPv4", 00:17:16.593 "traddr": "10.0.0.1", 00:17:16.594 "trsvcid": "48506" 00:17:16.594 }, 00:17:16.594 "auth": { 00:17:16.594 "state": "completed", 00:17:16.594 "digest": "sha512", 00:17:16.594 "dhgroup": "ffdhe4096" 00:17:16.594 } 00:17:16.594 } 00:17:16.594 ]' 00:17:16.594 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.594 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.594 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.594 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:16.594 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.594 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.594 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.594 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.851 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:17:17.416 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.416 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.416 16:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.416 16:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.416 16:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.416 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.416 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.416 16:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.674 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:17.674 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.674 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.674 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:17.674 16:59:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.674 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.674 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.674 16:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.674 16:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.674 16:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.674 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.674 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.932 00:17:17.932 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.932 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.932 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.190 { 00:17:18.190 "cntlid": 123, 00:17:18.190 "qid": 0, 00:17:18.190 "state": "enabled", 00:17:18.190 "thread": "nvmf_tgt_poll_group_000", 00:17:18.190 "listen_address": { 00:17:18.190 "trtype": "TCP", 00:17:18.190 "adrfam": "IPv4", 00:17:18.190 "traddr": "10.0.0.2", 00:17:18.190 "trsvcid": "4420" 00:17:18.190 }, 00:17:18.190 "peer_address": { 00:17:18.190 "trtype": "TCP", 00:17:18.190 "adrfam": "IPv4", 00:17:18.190 "traddr": "10.0.0.1", 00:17:18.190 "trsvcid": "48528" 00:17:18.190 }, 00:17:18.190 "auth": { 00:17:18.190 "state": "completed", 00:17:18.190 "digest": "sha512", 00:17:18.190 "dhgroup": "ffdhe4096" 00:17:18.190 } 00:17:18.190 } 00:17:18.190 ]' 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.190 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.448 16:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.015 16:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.273 16:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.274 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.274 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.532 00:17:19.532 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.532 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.532 16:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.532 16:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:17:19.532 16:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.532 16:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.532 16:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.532 16:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.532 16:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.532 { 00:17:19.532 "cntlid": 125, 00:17:19.532 "qid": 0, 00:17:19.532 "state": "enabled", 00:17:19.532 "thread": "nvmf_tgt_poll_group_000", 00:17:19.532 "listen_address": { 00:17:19.532 "trtype": "TCP", 00:17:19.532 "adrfam": "IPv4", 00:17:19.532 "traddr": "10.0.0.2", 00:17:19.532 "trsvcid": "4420" 00:17:19.532 }, 00:17:19.532 "peer_address": { 00:17:19.532 "trtype": "TCP", 00:17:19.532 "adrfam": "IPv4", 00:17:19.532 "traddr": "10.0.0.1", 00:17:19.532 "trsvcid": "48548" 00:17:19.532 }, 00:17:19.532 "auth": { 00:17:19.532 "state": "completed", 00:17:19.532 "digest": "sha512", 00:17:19.532 "dhgroup": "ffdhe4096" 00:17:19.532 } 00:17:19.532 } 00:17:19.532 ]' 00:17:19.532 16:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.532 16:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.532 16:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.791 16:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.791 16:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.791 16:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.791 16:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.791 16:59:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.791 16:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:17:20.357 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.615 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.872 00:17:20.872 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.872 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.872 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.131 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:17:21.131 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.131 16:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.131 16:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.131 16:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.131 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.131 { 00:17:21.131 "cntlid": 127, 00:17:21.131 "qid": 0, 00:17:21.131 "state": "enabled", 00:17:21.131 "thread": "nvmf_tgt_poll_group_000", 00:17:21.131 "listen_address": { 00:17:21.131 "trtype": "TCP", 00:17:21.131 "adrfam": "IPv4", 00:17:21.131 "traddr": "10.0.0.2", 00:17:21.131 "trsvcid": "4420" 00:17:21.131 }, 00:17:21.131 "peer_address": { 00:17:21.131 "trtype": "TCP", 00:17:21.131 "adrfam": "IPv4", 00:17:21.131 "traddr": "10.0.0.1", 00:17:21.131 "trsvcid": "54134" 00:17:21.131 }, 00:17:21.131 "auth": { 00:17:21.131 "state": "completed", 00:17:21.131 "digest": "sha512", 00:17:21.131 "dhgroup": "ffdhe4096" 00:17:21.131 } 00:17:21.131 } 00:17:21.131 ]' 00:17:21.131 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.131 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.131 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.131 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.131 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.387 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.387 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.387 16:59:27 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.387 16:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:17:21.951 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.951 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.951 16:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.951 16:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.951 16:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.951 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.951 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.951 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:21.951 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.238 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:17:22.238 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:17:22.238 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:22.238 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:22.238 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:22.238 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.238 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.238 16:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.238 16:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.238 16:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.239 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.239 16:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.525 00:17:22.525 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.525 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.525 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.782 { 00:17:22.782 "cntlid": 129, 00:17:22.782 "qid": 0, 00:17:22.782 "state": "enabled", 00:17:22.782 "thread": "nvmf_tgt_poll_group_000", 00:17:22.782 "listen_address": { 00:17:22.782 "trtype": "TCP", 00:17:22.782 "adrfam": "IPv4", 00:17:22.782 "traddr": "10.0.0.2", 00:17:22.782 "trsvcid": "4420" 00:17:22.782 }, 00:17:22.782 "peer_address": { 00:17:22.782 "trtype": "TCP", 00:17:22.782 "adrfam": "IPv4", 00:17:22.782 "traddr": "10.0.0.1", 00:17:22.782 "trsvcid": "54162" 00:17:22.782 }, 00:17:22.782 "auth": { 00:17:22.782 "state": "completed", 00:17:22.782 "digest": "sha512", 00:17:22.782 "dhgroup": "ffdhe6144" 00:17:22.782 } 00:17:22.782 } 00:17:22.782 ]' 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.782 16:59:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.782 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.040 16:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:17:23.605 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.605 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.605 16:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.605 16:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.605 16:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.605 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.605 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:23.605 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:23.862 16:59:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:23.862 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.862 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.862 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:23.862 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:23.862 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.862 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.862 16:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.862 16:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.862 16:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.862 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.862 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.119 00:17:24.119 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.119 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:17:24.119 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.377 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.377 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.377 16:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.377 16:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.377 16:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.377 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.377 { 00:17:24.377 "cntlid": 131, 00:17:24.377 "qid": 0, 00:17:24.377 "state": "enabled", 00:17:24.377 "thread": "nvmf_tgt_poll_group_000", 00:17:24.377 "listen_address": { 00:17:24.377 "trtype": "TCP", 00:17:24.377 "adrfam": "IPv4", 00:17:24.377 "traddr": "10.0.0.2", 00:17:24.377 "trsvcid": "4420" 00:17:24.377 }, 00:17:24.377 "peer_address": { 00:17:24.377 "trtype": "TCP", 00:17:24.377 "adrfam": "IPv4", 00:17:24.377 "traddr": "10.0.0.1", 00:17:24.377 "trsvcid": "54196" 00:17:24.377 }, 00:17:24.377 "auth": { 00:17:24.377 "state": "completed", 00:17:24.377 "digest": "sha512", 00:17:24.377 "dhgroup": "ffdhe6144" 00:17:24.377 } 00:17:24.377 } 00:17:24.377 ]' 00:17:24.377 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.377 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.377 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.377 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.377 16:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:17:24.377 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.377 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.377 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.634 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:17:25.200 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.200 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.200 16:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.200 16:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.200 16:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.200 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.200 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:25.200 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.458 16:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.716 00:17:25.716 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:17:25.716 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.716 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.974 { 00:17:25.974 "cntlid": 133, 00:17:25.974 "qid": 0, 00:17:25.974 "state": "enabled", 00:17:25.974 "thread": "nvmf_tgt_poll_group_000", 00:17:25.974 "listen_address": { 00:17:25.974 "trtype": "TCP", 00:17:25.974 "adrfam": "IPv4", 00:17:25.974 "traddr": "10.0.0.2", 00:17:25.974 "trsvcid": "4420" 00:17:25.974 }, 00:17:25.974 "peer_address": { 00:17:25.974 "trtype": "TCP", 00:17:25.974 "adrfam": "IPv4", 00:17:25.974 "traddr": "10.0.0.1", 00:17:25.974 "trsvcid": "54228" 00:17:25.974 }, 00:17:25.974 "auth": { 00:17:25.974 "state": "completed", 00:17:25.974 "digest": "sha512", 00:17:25.974 "dhgroup": "ffdhe6144" 00:17:25.974 } 00:17:25.974 } 00:17:25.974 ]' 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:25.974 16:59:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.974 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.232 16:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:17:26.835 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.835 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.835 16:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.835 16:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.835 16:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.835 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.835 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.835 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.092 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:27.092 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.092 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.092 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:27.092 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:27.092 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.093 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:27.093 16:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.093 16:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.093 16:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.093 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.093 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.349 00:17:27.349 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:17:27.349 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.349 16:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.606 { 00:17:27.606 "cntlid": 135, 00:17:27.606 "qid": 0, 00:17:27.606 "state": "enabled", 00:17:27.606 "thread": "nvmf_tgt_poll_group_000", 00:17:27.606 "listen_address": { 00:17:27.606 "trtype": "TCP", 00:17:27.606 "adrfam": "IPv4", 00:17:27.606 "traddr": "10.0.0.2", 00:17:27.606 "trsvcid": "4420" 00:17:27.606 }, 00:17:27.606 "peer_address": { 00:17:27.606 "trtype": "TCP", 00:17:27.606 "adrfam": "IPv4", 00:17:27.606 "traddr": "10.0.0.1", 00:17:27.606 "trsvcid": "54254" 00:17:27.606 }, 00:17:27.606 "auth": { 00:17:27.606 "state": "completed", 00:17:27.606 "digest": "sha512", 00:17:27.606 "dhgroup": "ffdhe6144" 00:17:27.606 } 00:17:27.606 } 00:17:27.606 ]' 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.606 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.863 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:17:28.427 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.427 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.427 16:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.427 16:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.427 16:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.427 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.427 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.427 16:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:28.427 16:59:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:28.684 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:28.684 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.684 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:28.684 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:28.685 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:28.685 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.685 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.685 16:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.685 16:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.685 16:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.685 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.685 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.250 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.250 { 00:17:29.250 "cntlid": 137, 00:17:29.250 "qid": 0, 00:17:29.250 "state": "enabled", 00:17:29.250 "thread": "nvmf_tgt_poll_group_000", 00:17:29.250 "listen_address": { 00:17:29.250 "trtype": "TCP", 00:17:29.250 "adrfam": "IPv4", 00:17:29.250 "traddr": "10.0.0.2", 00:17:29.250 "trsvcid": "4420" 00:17:29.250 }, 00:17:29.250 "peer_address": { 00:17:29.250 "trtype": "TCP", 00:17:29.250 "adrfam": "IPv4", 00:17:29.250 "traddr": "10.0.0.1", 00:17:29.250 "trsvcid": "54278" 00:17:29.250 }, 00:17:29.250 "auth": { 00:17:29.250 "state": "completed", 00:17:29.250 "digest": "sha512", 00:17:29.250 "dhgroup": "ffdhe8192" 00:17:29.250 } 00:17:29.250 } 00:17:29.250 ]' 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.250 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:17:29.508 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:29.508 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.508 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.508 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.508 16:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.508 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:17:30.074 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.074 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.074 16:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.074 16:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.074 16:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.074 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.074 16:59:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:30.074 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.332 16:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.898 00:17:30.898 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.898 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.898 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.898 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.898 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.898 16:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.898 16:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.156 16:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.156 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.156 { 00:17:31.156 "cntlid": 139, 00:17:31.156 "qid": 0, 00:17:31.156 "state": "enabled", 00:17:31.156 "thread": "nvmf_tgt_poll_group_000", 00:17:31.156 "listen_address": { 00:17:31.156 "trtype": "TCP", 00:17:31.156 "adrfam": "IPv4", 00:17:31.156 "traddr": "10.0.0.2", 00:17:31.156 "trsvcid": "4420" 00:17:31.156 }, 00:17:31.156 "peer_address": { 00:17:31.156 "trtype": "TCP", 00:17:31.156 "adrfam": "IPv4", 00:17:31.156 "traddr": "10.0.0.1", 00:17:31.156 "trsvcid": "48342" 00:17:31.156 }, 00:17:31.156 "auth": { 00:17:31.156 "state": "completed", 00:17:31.156 "digest": "sha512", 00:17:31.156 "dhgroup": "ffdhe8192" 00:17:31.156 } 00:17:31.156 } 00:17:31.156 ]' 00:17:31.156 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.156 16:59:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.156 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.156 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.156 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.156 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.156 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.156 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.414 16:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTE2ZWE5ZjA0MzUzZDk5NTNjODdiMmEzOTk1NjA5NjhLm+iS: --dhchap-ctrl-secret DHHC-1:02:OTQzZDlkZjMzMWQ2NmIxYjVlMjE0ZGYwZjFkZTVmMDExOTZkMWQzYmI0MDI2MzRi4KLcJg==: 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.980 16:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.545 00:17:32.545 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.545 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.545 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.803 { 00:17:32.803 "cntlid": 141, 00:17:32.803 "qid": 0, 00:17:32.803 "state": "enabled", 00:17:32.803 "thread": "nvmf_tgt_poll_group_000", 00:17:32.803 "listen_address": { 00:17:32.803 "trtype": "TCP", 00:17:32.803 "adrfam": "IPv4", 00:17:32.803 "traddr": "10.0.0.2", 00:17:32.803 "trsvcid": "4420" 00:17:32.803 }, 00:17:32.803 "peer_address": { 00:17:32.803 "trtype": "TCP", 00:17:32.803 "adrfam": "IPv4", 00:17:32.803 "traddr": "10.0.0.1", 00:17:32.803 "trsvcid": "48382" 00:17:32.803 }, 00:17:32.803 "auth": { 00:17:32.803 "state": "completed", 00:17:32.803 "digest": "sha512", 00:17:32.803 "dhgroup": "ffdhe8192" 00:17:32.803 } 00:17:32.803 } 00:17:32.803 ]' 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.803 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.061 16:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTUyY2UxMGE4ZmZlZjc2ZGZjOGVmMGI0MzVkMmY2NTVlMTEzMTU5NTNkM2I0NTZj/huBxg==: --dhchap-ctrl-secret DHHC-1:01:NzRmMjEzNzIwZmJiN2FmYjA5ODdlODRlZjhlN2Y4ZjjiZzQ2: 00:17:33.627 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.627 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.627 16:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.627 16:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.627 16:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.627 
16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.627 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.627 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.887 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.453 00:17:34.453 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.453 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.453 16:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.453 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.453 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.453 16:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.453 16:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.453 16:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.453 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.453 { 00:17:34.453 "cntlid": 143, 00:17:34.453 "qid": 0, 00:17:34.453 "state": "enabled", 00:17:34.453 "thread": "nvmf_tgt_poll_group_000", 00:17:34.453 "listen_address": { 00:17:34.453 "trtype": "TCP", 00:17:34.453 "adrfam": "IPv4", 00:17:34.453 "traddr": "10.0.0.2", 00:17:34.453 "trsvcid": "4420" 00:17:34.453 }, 00:17:34.453 "peer_address": { 00:17:34.453 "trtype": "TCP", 00:17:34.453 "adrfam": "IPv4", 00:17:34.453 "traddr": "10.0.0.1", 00:17:34.453 "trsvcid": "48402" 00:17:34.453 }, 00:17:34.453 "auth": { 00:17:34.453 "state": "completed", 00:17:34.453 "digest": "sha512", 00:17:34.453 "dhgroup": "ffdhe8192" 00:17:34.453 } 00:17:34.453 } 00:17:34.453 ]' 00:17:34.453 16:59:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.453 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.454 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.711 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.711 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.711 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.711 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.711 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.711 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:17:35.275 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.275 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.275 16:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.275 16:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.275 16:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.275 
16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:35.275 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:35.275 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:35.275 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.275 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.276 16:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.533 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:35.533 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.533 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.533 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:35.533 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:35.533 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.533 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.533 16:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.533 16:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.533 16:59:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.533 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.533 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.119 00:17:36.119 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.119 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.119 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.119 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.119 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.119 16:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.119 16:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.119 16:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.119 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.119 { 00:17:36.119 "cntlid": 145, 00:17:36.119 "qid": 0, 00:17:36.119 "state": "enabled", 00:17:36.119 "thread": "nvmf_tgt_poll_group_000", 00:17:36.119 "listen_address": { 00:17:36.119 "trtype": "TCP", 00:17:36.119 "adrfam": 
"IPv4", 00:17:36.119 "traddr": "10.0.0.2", 00:17:36.119 "trsvcid": "4420" 00:17:36.119 }, 00:17:36.119 "peer_address": { 00:17:36.119 "trtype": "TCP", 00:17:36.119 "adrfam": "IPv4", 00:17:36.119 "traddr": "10.0.0.1", 00:17:36.119 "trsvcid": "48418" 00:17:36.119 }, 00:17:36.119 "auth": { 00:17:36.119 "state": "completed", 00:17:36.119 "digest": "sha512", 00:17:36.119 "dhgroup": "ffdhe8192" 00:17:36.119 } 00:17:36.119 } 00:17:36.119 ]' 00:17:36.119 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.377 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.377 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.377 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.377 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.377 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.377 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.377 16:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.637 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmFiMGZmN2RjOGMzOTVjMTI2NTE1NzYwOGU5NWFlMTVkZTk5MTJjNmNhYjdkMzMyl5SDBw==: --dhchap-ctrl-secret DHHC-1:03:ZWVhZGFmNjQ2ZjYxYjI2ZDkyNGI5NmY3MWJjZjQ0Yzc3MjI5ZmZlNmNmYjViYTExNmZkZGY4ODkxMjNmMjdlMgzU7MY=: 00:17:36.975 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.232 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.232 16:59:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:37.232 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:37.489 request: 00:17:37.489 { 00:17:37.489 "name": "nvme0", 00:17:37.489 "trtype": "tcp", 00:17:37.489 "traddr": "10.0.0.2", 00:17:37.489 "adrfam": "ipv4", 00:17:37.489 "trsvcid": "4420", 00:17:37.489 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:37.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:37.489 "prchk_reftag": false, 00:17:37.489 "prchk_guard": false, 00:17:37.489 "hdgst": false, 00:17:37.489 "ddgst": false, 00:17:37.489 "dhchap_key": "key2", 00:17:37.489 "method": "bdev_nvme_attach_controller", 00:17:37.489 "req_id": 1 00:17:37.489 } 00:17:37.489 Got JSON-RPC error response 00:17:37.489 response: 00:17:37.489 { 00:17:37.489 "code": -5, 00:17:37.489 "message": "Input/output error" 00:17:37.489 } 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.489 
16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:37.489 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.490 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:37.490 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.055 request: 00:17:38.055 { 00:17:38.055 "name": "nvme0", 00:17:38.055 "trtype": "tcp", 00:17:38.055 "traddr": "10.0.0.2", 00:17:38.055 "adrfam": "ipv4", 00:17:38.055 "trsvcid": "4420", 00:17:38.055 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:38.055 "prchk_reftag": false, 00:17:38.055 "prchk_guard": false, 00:17:38.055 "hdgst": false, 00:17:38.055 "ddgst": false, 00:17:38.055 "dhchap_key": "key1", 00:17:38.055 "dhchap_ctrlr_key": "ckey2", 00:17:38.055 "method": "bdev_nvme_attach_controller", 00:17:38.055 "req_id": 1 00:17:38.055 } 00:17:38.055 Got JSON-RPC error response 00:17:38.055 response: 00:17:38.055 { 00:17:38.055 "code": -5, 00:17:38.055 "message": "Input/output error" 00:17:38.055 } 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.055 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.622 request: 00:17:38.622 { 00:17:38.622 "name": "nvme0", 00:17:38.622 "trtype": "tcp", 00:17:38.622 "traddr": "10.0.0.2", 00:17:38.622 "adrfam": "ipv4", 00:17:38.622 "trsvcid": "4420", 00:17:38.622 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:38.622 "prchk_reftag": false, 00:17:38.622 "prchk_guard": false, 00:17:38.622 "hdgst": false, 00:17:38.622 "ddgst": false, 00:17:38.622 "dhchap_key": "key1", 00:17:38.622 "dhchap_ctrlr_key": "ckey1", 00:17:38.622 "method": "bdev_nvme_attach_controller", 00:17:38.622 "req_id": 1 00:17:38.622 } 00:17:38.622 Got JSON-RPC error response 00:17:38.622 response: 00:17:38.622 { 00:17:38.622 "code": -5, 00:17:38.622 "message": "Input/output error" 00:17:38.622 } 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.622 16:59:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 75536 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 75536 ']' 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 75536 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75536 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75536' 00:17:38.622 killing process with pid 75536 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 75536 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 75536 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:38.622 
16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=96357 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 96357 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 96357 ']' 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.622 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 96357 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 96357 ']' 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.556 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.813 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.813 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:39.813 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:39.813 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.813 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.813 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.814 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.378 00:17:40.378 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.378 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.378 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.635 { 00:17:40.635 "cntlid": 1, 00:17:40.635 "qid": 0, 00:17:40.635 "state": "enabled", 00:17:40.635 "thread": "nvmf_tgt_poll_group_000", 00:17:40.635 "listen_address": { 00:17:40.635 "trtype": "TCP", 00:17:40.635 "adrfam": "IPv4", 00:17:40.635 "traddr": "10.0.0.2", 00:17:40.635 "trsvcid": "4420" 00:17:40.635 }, 00:17:40.635 "peer_address": { 00:17:40.635 "trtype": "TCP", 00:17:40.635 "adrfam": "IPv4", 00:17:40.635 "traddr": "10.0.0.1", 00:17:40.635 "trsvcid": 
"48476" 00:17:40.635 }, 00:17:40.635 "auth": { 00:17:40.635 "state": "completed", 00:17:40.635 "digest": "sha512", 00:17:40.635 "dhgroup": "ffdhe8192" 00:17:40.635 } 00:17:40.635 } 00:17:40.635 ]' 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.635 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.891 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWZiZTg4NGFkNDhkOTFjMTQ5YmVjMjYyYzNmZmMxMDQ1ODg1YjNmNmYxZTAwZWQyYjU4MzQ4NmJmMDFiNjFjMk+Yj10=: 00:17:41.453 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.453 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.453 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:41.453 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.453 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.453 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:41.453 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.453 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.453 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.453 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:41.453 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:41.709 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.709 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:41.709 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.709 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:41.709 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.709 16:59:48 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:41.709 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.709 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.709 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.709 request: 00:17:41.709 { 00:17:41.709 "name": "nvme0", 00:17:41.709 "trtype": "tcp", 00:17:41.709 "traddr": "10.0.0.2", 00:17:41.709 "adrfam": "ipv4", 00:17:41.709 "trsvcid": "4420", 00:17:41.709 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:41.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:41.709 "prchk_reftag": false, 00:17:41.709 "prchk_guard": false, 00:17:41.709 "hdgst": false, 00:17:41.709 "ddgst": false, 00:17:41.709 "dhchap_key": "key3", 00:17:41.709 "method": "bdev_nvme_attach_controller", 00:17:41.709 "req_id": 1 00:17:41.709 } 00:17:41.709 Got JSON-RPC error response 00:17:41.709 response: 00:17:41.709 { 00:17:41.709 "code": -5, 00:17:41.709 "message": "Input/output error" 00:17:41.709 } 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:41.965 16:59:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.965 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.965 
16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.236 request: 00:17:42.236 { 00:17:42.236 "name": "nvme0", 00:17:42.236 "trtype": "tcp", 00:17:42.236 "traddr": "10.0.0.2", 00:17:42.236 "adrfam": "ipv4", 00:17:42.236 "trsvcid": "4420", 00:17:42.236 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:42.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:42.236 "prchk_reftag": false, 00:17:42.236 "prchk_guard": false, 00:17:42.236 "hdgst": false, 00:17:42.236 "ddgst": false, 00:17:42.236 "dhchap_key": "key3", 00:17:42.236 "method": "bdev_nvme_attach_controller", 00:17:42.236 "req_id": 1 00:17:42.236 } 00:17:42.236 Got JSON-RPC error response 00:17:42.236 response: 00:17:42.236 { 00:17:42.236 "code": -5, 00:17:42.236 "message": "Input/output error" 00:17:42.236 } 00:17:42.236 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:42.236 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:42.236 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:42.236 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:42.236 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:42.236 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:42.236 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:42.236 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.236 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.236 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.493 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:42.494 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.494 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.494 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.494 request: 00:17:42.494 { 00:17:42.494 "name": "nvme0", 00:17:42.494 "trtype": "tcp", 00:17:42.494 "traddr": "10.0.0.2", 00:17:42.494 "adrfam": "ipv4", 00:17:42.494 "trsvcid": "4420", 00:17:42.494 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:42.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:42.494 "prchk_reftag": false, 00:17:42.494 "prchk_guard": false, 00:17:42.494 "hdgst": false, 00:17:42.494 "ddgst": false, 00:17:42.494 "dhchap_key": "key0", 00:17:42.494 "dhchap_ctrlr_key": "key1", 00:17:42.494 "method": "bdev_nvme_attach_controller", 00:17:42.494 "req_id": 1 00:17:42.494 } 00:17:42.494 Got JSON-RPC error response 00:17:42.494 response: 00:17:42.494 { 
00:17:42.494 "code": -5, 00:17:42.494 "message": "Input/output error" 00:17:42.494 } 00:17:42.494 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:42.494 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:42.494 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:42.494 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:42.494 16:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:42.494 16:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:42.750 00:17:42.750 16:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:42.750 16:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:42.750 16:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.007 16:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.007 16:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.007 16:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 75778 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 75778 ']' 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 75778 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75778 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75778' 00:17:43.264 killing process with pid 75778 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 75778 00:17:43.264 16:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 75778 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.523 rmmod nvme_tcp 00:17:43.523 rmmod nvme_fabrics 00:17:43.523 rmmod 
nvme_keyring 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 96357 ']' 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 96357 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 96357 ']' 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 96357 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:43.523 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96357 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96357' 00:17:43.780 killing process with pid 96357 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 96357 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 96357 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.780 16:59:50 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.780 16:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.307 16:59:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:46.307 16:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.gUq /tmp/spdk.key-sha256.P3O /tmp/spdk.key-sha384.FbJ /tmp/spdk.key-sha512.aVP /tmp/spdk.key-sha512.S5W /tmp/spdk.key-sha384.SQP /tmp/spdk.key-sha256.S0W '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:46.307 00:17:46.307 real 2m11.454s 00:17:46.307 user 5m2.210s 00:17:46.307 sys 0m20.350s 00:17:46.307 16:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.307 16:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.307 ************************************ 00:17:46.307 END TEST nvmf_auth_target 00:17:46.307 ************************************ 00:17:46.307 16:59:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:46.307 16:59:52 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:46.307 16:59:52 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:46.307 16:59:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:46.307 16:59:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.307 16:59:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:46.307 ************************************ 00:17:46.307 START 
TEST nvmf_bdevio_no_huge 00:17:46.307 ************************************ 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:46.307 * Looking for test storage... 00:17:46.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.307 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:46.308 16:59:52 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:46.308 16:59:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:51.586 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:51.586 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:51.586 Found net devices under 0000:86:00.0: cvl_0_0 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:51.586 Found net devices under 0000:86:00.1: cvl_0_1 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:51.586 16:59:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.586 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.587 
16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:51.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:17:51.587 00:17:51.587 --- 10.0.0.2 ping statistics --- 00:17:51.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.587 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:17:51.587 00:17:51.587 --- 10.0.0.1 ping statistics --- 00:17:51.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.587 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:51.587 16:59:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=100501 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 100501 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 100501 ']' 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.587 16:59:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:51.587 [2024-07-15 16:59:57.638798] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:17:51.587 [2024-07-15 16:59:57.638845] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:51.587 [2024-07-15 16:59:57.701448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.587 [2024-07-15 16:59:57.786357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.587 [2024-07-15 16:59:57.786392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.587 [2024-07-15 16:59:57.786398] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.587 [2024-07-15 16:59:57.786405] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.587 [2024-07-15 16:59:57.786411] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:51.587 [2024-07-15 16:59:57.786523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:51.587 [2024-07-15 16:59:57.786637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:51.587 [2024-07-15 16:59:57.786742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.587 [2024-07-15 16:59:57.786743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.846 [2024-07-15 16:59:58.473978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.846 Malloc0 00:17:51.846 16:59:58 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.846 [2024-07-15 16:59:58.510181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.846 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.105 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:52.105 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:52.105 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:52.105 16:59:58 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:52.105 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:52.105 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:52.105 { 00:17:52.105 "params": { 00:17:52.105 "name": "Nvme$subsystem", 00:17:52.105 "trtype": "$TEST_TRANSPORT", 00:17:52.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.105 "adrfam": "ipv4", 00:17:52.105 "trsvcid": "$NVMF_PORT", 00:17:52.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.105 "hdgst": ${hdgst:-false}, 00:17:52.105 "ddgst": ${ddgst:-false} 00:17:52.105 }, 00:17:52.105 "method": "bdev_nvme_attach_controller" 00:17:52.105 } 00:17:52.105 EOF 00:17:52.105 )") 00:17:52.105 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:52.105 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:52.105 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:52.105 16:59:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:52.105 "params": { 00:17:52.105 "name": "Nvme1", 00:17:52.105 "trtype": "tcp", 00:17:52.105 "traddr": "10.0.0.2", 00:17:52.105 "adrfam": "ipv4", 00:17:52.105 "trsvcid": "4420", 00:17:52.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.105 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.105 "hdgst": false, 00:17:52.105 "ddgst": false 00:17:52.105 }, 00:17:52.105 "method": "bdev_nvme_attach_controller" 00:17:52.105 }' 00:17:52.105 [2024-07-15 16:59:58.558383] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:17:52.105 [2024-07-15 16:59:58.558430] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid100646 ] 00:17:52.105 [2024-07-15 16:59:58.616738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.105 [2024-07-15 16:59:58.703994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.105 [2024-07-15 16:59:58.704089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.105 [2024-07-15 16:59:58.704091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.363 I/O targets: 00:17:52.363 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:52.363 00:17:52.363 00:17:52.363 CUnit - A unit testing framework for C - Version 2.1-3 00:17:52.363 http://cunit.sourceforge.net/ 00:17:52.363 00:17:52.363 00:17:52.363 Suite: bdevio tests on: Nvme1n1 00:17:52.622 Test: blockdev write read block ...passed 00:17:52.622 Test: blockdev write zeroes read block ...passed 00:17:52.622 Test: blockdev write zeroes read no split ...passed 00:17:52.622 Test: blockdev write zeroes read split ...passed 00:17:52.622 Test: blockdev write zeroes read split partial ...passed 00:17:52.622 Test: blockdev reset ...[2024-07-15 16:59:59.214655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:52.622 [2024-07-15 16:59:59.214716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f1300 (9): Bad file descriptor 00:17:52.881 [2024-07-15 16:59:59.317075] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:52.881 passed 00:17:52.881 Test: blockdev write read 8 blocks ...passed 00:17:52.881 Test: blockdev write read size > 128k ...passed 00:17:52.881 Test: blockdev write read invalid size ...passed 00:17:52.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:52.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:52.881 Test: blockdev write read max offset ...passed 00:17:52.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:52.881 Test: blockdev writev readv 8 blocks ...passed 00:17:52.881 Test: blockdev writev readv 30 x 1block ...passed 00:17:52.881 Test: blockdev writev readv block ...passed 00:17:52.881 Test: blockdev writev readv size > 128k ...passed 00:17:52.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:52.881 Test: blockdev comparev and writev ...[2024-07-15 16:59:59.532377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.881 [2024-07-15 16:59:59.532404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.881 [2024-07-15 16:59:59.532417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.881 [2024-07-15 16:59:59.532424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:52.881 [2024-07-15 16:59:59.532691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.881 [2024-07-15 16:59:59.532700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:52.881 [2024-07-15 16:59:59.532711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.881 [2024-07-15 16:59:59.532718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:52.881 [2024-07-15 16:59:59.532971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.881 [2024-07-15 16:59:59.532980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:52.881 [2024-07-15 16:59:59.532991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.881 [2024-07-15 16:59:59.532998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:52.881 [2024-07-15 16:59:59.533257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.881 [2024-07-15 16:59:59.533268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:52.881 [2024-07-15 16:59:59.533283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.881 [2024-07-15 16:59:59.533290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:53.140 passed 00:17:53.140 Test: blockdev nvme passthru rw ...passed 00:17:53.140 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:59:59.617627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:53.140 [2024-07-15 16:59:59.617642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:53.140 [2024-07-15 16:59:59.617779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:53.140 [2024-07-15 16:59:59.617788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:53.140 [2024-07-15 16:59:59.617919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:53.140 [2024-07-15 16:59:59.617930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:53.140 [2024-07-15 16:59:59.618062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:53.140 [2024-07-15 16:59:59.618071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:53.140 passed 00:17:53.140 Test: blockdev nvme admin passthru ...passed 00:17:53.140 Test: blockdev copy ...passed 00:17:53.140 00:17:53.140 Run Summary: Type Total Ran Passed Failed Inactive 00:17:53.140 suites 1 1 n/a 0 0 00:17:53.140 tests 23 23 23 0 0 00:17:53.140 asserts 152 152 152 0 n/a 00:17:53.140 00:17:53.140 Elapsed time = 1.338 seconds 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.399 16:59:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.399 rmmod nvme_tcp 00:17:53.399 rmmod nvme_fabrics 00:17:53.399 rmmod nvme_keyring 00:17:53.399 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.399 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:53.399 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:53.399 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 100501 ']' 00:17:53.399 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 100501 00:17:53.399 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 100501 ']' 00:17:53.399 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 100501 00:17:53.400 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:53.400 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.400 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100501 00:17:53.659 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:53.659 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:53.659 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 100501' 00:17:53.659 killing process with pid 100501 00:17:53.659 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 100501 00:17:53.659 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 100501 00:17:53.918 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.918 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.918 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.918 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.918 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.918 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.918 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.918 17:00:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.859 17:00:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.859 00:17:55.859 real 0m9.922s 00:17:55.859 user 0m14.076s 00:17:55.859 sys 0m4.582s 00:17:55.859 17:00:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.859 17:00:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:55.859 ************************************ 00:17:55.859 END TEST nvmf_bdevio_no_huge 00:17:55.859 ************************************ 00:17:55.859 17:00:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:55.859 17:00:02 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:55.859 17:00:02 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:55.859 17:00:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.859 17:00:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.859 ************************************ 00:17:55.859 START TEST nvmf_tls 00:17:55.859 ************************************ 00:17:55.859 17:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:56.117 * Looking for test storage... 00:17:56.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:56.118 17:00:02 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:56.118 17:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:01.389 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:01.389 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.389 17:00:07 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:01.389 Found net devices under 0000:86:00.0: cvl_0_0 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:01.389 Found net devices under 0000:86:00.1: cvl_0_1 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:01.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:18:01.389 00:18:01.389 --- 10.0.0.2 ping statistics --- 00:18:01.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.389 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:01.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:18:01.389 00:18:01.389 --- 10.0.0.1 ping statistics --- 00:18:01.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.389 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=104515 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 104515 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 104515 ']' 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.389 17:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.389 [2024-07-15 17:00:07.885510] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:18:01.389 [2024-07-15 17:00:07.885556] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.389 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.389 [2024-07-15 17:00:07.945471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.389 [2024-07-15 17:00:08.027129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.389 [2024-07-15 17:00:08.027163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.389 [2024-07-15 17:00:08.027170] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.389 [2024-07-15 17:00:08.027176] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.389 [2024-07-15 17:00:08.027181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:01.389 [2024-07-15 17:00:08.027198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.325 17:00:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.325 17:00:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:02.325 17:00:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.325 17:00:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:02.325 17:00:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.325 17:00:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.325 17:00:08 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:02.325 17:00:08 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:02.325 true 00:18:02.325 17:00:08 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:02.325 17:00:08 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:02.584 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:02.584 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:02.584 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:02.843 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:02.843 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:02.843 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:02.843 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:02.843 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:03.101 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:03.101 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:03.359 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:03.359 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:03.359 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:03.359 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:03.359 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:03.359 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:03.359 17:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:03.617 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:03.617 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:03.874 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:03.874 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:03.874 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:03.874 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:03.874 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.JDqONlHdxr 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:04.133 
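The `format_interchange_psk` calls traced above turn a raw hex key into the NVMe TLS interchange form `NVMeTLSkey-1:01:<base64>:`. Decoding the logged key shows the base64 payload carries the ASCII hex string itself plus four trailing bytes; the sketch below reconstructs that, under the assumption (not confirmed by the log) that the trailing bytes are a little-endian CRC32 of the payload and that digest id 1 maps to the "01" hash field. Like `nvmf/common.sh` itself, it shells out to an inline python helper.

```shell
# Sketch of format_interchange_psk: hex key -> NVMeTLSkey-1:<hash>:<b64>:.
# Assumption: payload = base64(ascii-hex-key + 4-byte LE CRC32 of it).
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key, digest = sys.argv[1], int(sys.argv[2])
raw = key.encode("ascii")
payload = base64.b64encode(raw + struct.pack("<I", zlib.crc32(raw))).decode()
print(f"NVMeTLSkey-1:{digest:02d}:{payload}:")
EOF
}
```

The two keys produced this way are then written to `mktemp` files and `chmod 0600`'d below, since the target and initiator consume the PSK as a file path.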
17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.YMq2QTIsYv 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.JDqONlHdxr 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.YMq2QTIsYv 00:18:04.133 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:04.397 17:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:04.659 17:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.JDqONlHdxr 00:18:04.659 17:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JDqONlHdxr 00:18:04.659 17:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:04.916 [2024-07-15 17:00:11.353152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.917 17:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:04.917 17:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:05.174 [2024-07-15 17:00:11.726112] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.174 [2024-07-15 17:00:11.726315] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:18:05.174 17:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:05.430 malloc0 00:18:05.430 17:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:05.430 17:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JDqONlHdxr 00:18:05.688 [2024-07-15 17:00:12.251653] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:05.688 17:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.JDqONlHdxr 00:18:05.688 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.876 Initializing NVMe Controllers 00:18:17.876 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:17.876 Initialization complete. Launching workers. 
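The `setup_nvmf_tgt` sequence traced above provisions the TLS-enabled target entirely over JSON-RPC. A condensed sketch of those `rpc.py` calls follows, with paths shortened and all arguments taken from the trace; the short flags `-o` (transport option) and `-k` (TLS listener) are reproduced as logged rather than expanded, since their long forms are not visible here, and the `DRY_RUN` wrapper is again a sketch-only addition.

```shell
# Print commands instead of running them when DRY_RUN=1 (sketch-only helper).
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

rpc="scripts/rpc.py"   # stands in for the full workspace path in the log

provision_tls_target() {
    run $rpc nvmf_create_transport -t tcp -o
    run $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    run $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k
    run $rpc bdev_malloc_create 32 4096 -b malloc0
    run $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    run $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JDqONlHdxr
}
```

Only hosts registered with the matching PSK can complete the TLS handshake, which is what the later bdevperf runs exercise: the correct key (`tmp.JDqONlHdxr`) connects, while the second key is expected to fail with an I/O error.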
00:18:17.876 ======================================================== 00:18:17.876 Latency(us) 00:18:17.876 Device Information : IOPS MiB/s Average min max 00:18:17.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16554.43 64.67 3866.45 848.14 4715.97 00:18:17.876 ======================================================== 00:18:17.876 Total : 16554.43 64.67 3866.45 848.14 4715.97 00:18:17.876 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JDqONlHdxr 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JDqONlHdxr' 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=107256 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 107256 /var/tmp/bdevperf.sock 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 107256 ']' 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.876 17:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.877 17:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.877 17:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.877 17:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.877 [2024-07-15 17:00:22.421498] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:18:17.877 [2024-07-15 17:00:22.421545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107256 ] 00:18:17.877 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.877 [2024-07-15 17:00:22.471195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.877 [2024-07-15 17:00:22.549269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.877 17:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.877 17:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:17.877 17:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JDqONlHdxr 00:18:17.877 [2024-07-15 17:00:23.387182] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.877 [2024-07-15 17:00:23.387247] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:17.877 TLSTESTn1 00:18:17.877 17:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:17.877 Running I/O for 10 seconds... 00:18:27.849 00:18:27.849 Latency(us) 00:18:27.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.849 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:27.849 Verification LBA range: start 0x0 length 0x2000 00:18:27.849 TLSTESTn1 : 10.02 4724.70 18.46 0.00 0.00 27043.12 6126.19 43310.75 00:18:27.849 =================================================================================================================== 00:18:27.849 Total : 4724.70 18.46 0.00 0.00 27043.12 6126.19 43310.75 00:18:27.849 0 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 107256 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 107256 ']' 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 107256 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107256 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107256' 00:18:27.849 killing process with pid 107256 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 107256 00:18:27.849 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.849 00:18:27.849 Latency(us) 
00:18:27.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.849 =================================================================================================================== 00:18:27.849 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.849 [2024-07-15 17:00:33.676245] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 107256 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YMq2QTIsYv 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YMq2QTIsYv 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YMq2QTIsYv 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.849 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YMq2QTIsYv' 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=109100 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 109100 /var/tmp/bdevperf.sock 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 109100 ']' 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.850 17:00:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.850 [2024-07-15 17:00:33.903633] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:18:27.850 [2024-07-15 17:00:33.903682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109100 ] 00:18:27.850 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.850 [2024-07-15 17:00:33.953415] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.850 [2024-07-15 17:00:34.024633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YMq2QTIsYv 00:18:27.850 [2024-07-15 17:00:34.277737] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.850 [2024-07-15 17:00:34.277803] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:27.850 [2024-07-15 17:00:34.286785] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:27.850 [2024-07-15 17:00:34.287007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220c570 (107): Transport endpoint is not connected 00:18:27.850 [2024-07-15 17:00:34.287998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220c570 (9): Bad file descriptor 00:18:27.850 [2024-07-15 
17:00:34.289000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:27.850 [2024-07-15 17:00:34.289010] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:27.850 [2024-07-15 17:00:34.289019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:27.850 request: 00:18:27.850 { 00:18:27.850 "name": "TLSTEST", 00:18:27.850 "trtype": "tcp", 00:18:27.850 "traddr": "10.0.0.2", 00:18:27.850 "adrfam": "ipv4", 00:18:27.850 "trsvcid": "4420", 00:18:27.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.850 "prchk_reftag": false, 00:18:27.850 "prchk_guard": false, 00:18:27.850 "hdgst": false, 00:18:27.850 "ddgst": false, 00:18:27.850 "psk": "/tmp/tmp.YMq2QTIsYv", 00:18:27.850 "method": "bdev_nvme_attach_controller", 00:18:27.850 "req_id": 1 00:18:27.850 } 00:18:27.850 Got JSON-RPC error response 00:18:27.850 response: 00:18:27.850 { 00:18:27.850 "code": -5, 00:18:27.850 "message": "Input/output error" 00:18:27.850 } 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 109100 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 109100 ']' 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 109100 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109100 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109100' 
00:18:27.850 killing process with pid 109100 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 109100 00:18:27.850 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.850 00:18:27.850 Latency(us) 00:18:27.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.850 =================================================================================================================== 00:18:27.850 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.850 [2024-07-15 17:00:34.346845] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:27.850 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 109100 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.JDqONlHdxr 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.JDqONlHdxr 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:28.113 17:00:34 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.JDqONlHdxr 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JDqONlHdxr' 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=109320 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 109320 /var/tmp/bdevperf.sock 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 109320 ']' 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.113 17:00:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.113 [2024-07-15 17:00:34.572776] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:18:28.113 [2024-07-15 17:00:34.572825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109320 ] 00:18:28.113 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.113 [2024-07-15 17:00:34.623799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.113 [2024-07-15 17:00:34.692093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.716 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.716 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:28.716 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.JDqONlHdxr 00:18:28.974 [2024-07-15 17:00:35.538540] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.974 [2024-07-15 17:00:35.538611] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:28.974 [2024-07-15 17:00:35.549872] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:28.975 [2024-07-15 17:00:35.549894] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:28.975 [2024-07-15 17:00:35.549916] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:28.975 [2024-07-15 17:00:35.550762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9d570 (107): Transport endpoint is not connected 00:18:28.975 [2024-07-15 17:00:35.551755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9d570 (9): Bad file descriptor 00:18:28.975 [2024-07-15 17:00:35.552756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:28.975 [2024-07-15 17:00:35.552766] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:28.975 [2024-07-15 17:00:35.552775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:28.975 request: 00:18:28.975 { 00:18:28.975 "name": "TLSTEST", 00:18:28.975 "trtype": "tcp", 00:18:28.975 "traddr": "10.0.0.2", 00:18:28.975 "adrfam": "ipv4", 00:18:28.975 "trsvcid": "4420", 00:18:28.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.975 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:28.975 "prchk_reftag": false, 00:18:28.975 "prchk_guard": false, 00:18:28.975 "hdgst": false, 00:18:28.975 "ddgst": false, 00:18:28.975 "psk": "/tmp/tmp.JDqONlHdxr", 00:18:28.975 "method": "bdev_nvme_attach_controller", 00:18:28.975 "req_id": 1 00:18:28.975 } 00:18:28.975 Got JSON-RPC error response 00:18:28.975 response: 00:18:28.975 { 00:18:28.975 "code": -5, 00:18:28.975 "message": "Input/output error" 00:18:28.975 } 00:18:28.975 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 109320 00:18:28.975 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 109320 ']' 00:18:28.975 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 109320 00:18:28.975 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:28.975 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.975 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109320 00:18:28.975 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:28.975 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:28.975 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109320' 00:18:28.975 killing process with pid 109320 00:18:28.975 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 109320 00:18:28.975 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.975 00:18:28.975 Latency(us) 00:18:28.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.975 
=================================================================================================================== 00:18:28.975 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.975 [2024-07-15 17:00:35.618553] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:28.975 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 109320 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.JDqONlHdxr 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.JDqONlHdxr 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.JDqONlHdxr 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JDqONlHdxr' 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=109564 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 109564 /var/tmp/bdevperf.sock 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 109564 ']' 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.233 17:00:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.233 [2024-07-15 17:00:35.838429] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:18:29.233 [2024-07-15 17:00:35.838476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109564 ] 00:18:29.233 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.233 [2024-07-15 17:00:35.888209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.490 [2024-07-15 17:00:35.956634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.064 17:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.064 17:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:30.064 17:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JDqONlHdxr 00:18:30.323 [2024-07-15 17:00:36.803105] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.323 [2024-07-15 17:00:36.803180] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:30.323 [2024-07-15 17:00:36.813094] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:30.323 [2024-07-15 17:00:36.813116] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:30.323 [2024-07-15 17:00:36.813140] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:30.323 [2024-07-15 17:00:36.813257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92f570 (107): Transport endpoint is not connected 00:18:30.323 [2024-07-15 17:00:36.814250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92f570 (9): Bad file descriptor 00:18:30.323 [2024-07-15 17:00:36.815251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:30.323 [2024-07-15 17:00:36.815262] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:30.323 [2024-07-15 17:00:36.815271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:18:30.323 request: 00:18:30.323 { 00:18:30.323 "name": "TLSTEST", 00:18:30.323 "trtype": "tcp", 00:18:30.323 "traddr": "10.0.0.2", 00:18:30.323 "adrfam": "ipv4", 00:18:30.324 "trsvcid": "4420", 00:18:30.324 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:30.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.324 "prchk_reftag": false, 00:18:30.324 "prchk_guard": false, 00:18:30.324 "hdgst": false, 00:18:30.324 "ddgst": false, 00:18:30.324 "psk": "/tmp/tmp.JDqONlHdxr", 00:18:30.324 "method": "bdev_nvme_attach_controller", 00:18:30.324 "req_id": 1 00:18:30.324 } 00:18:30.324 Got JSON-RPC error response 00:18:30.324 response: 00:18:30.324 { 00:18:30.324 "code": -5, 00:18:30.324 "message": "Input/output error" 00:18:30.324 } 00:18:30.324 17:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 109564 00:18:30.324 17:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 109564 ']' 00:18:30.324 17:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 109564 00:18:30.324 17:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:30.324 17:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.324 17:00:36 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109564 00:18:30.324 17:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:30.324 17:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:30.324 17:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109564' 00:18:30.324 killing process with pid 109564 00:18:30.324 17:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 109564 00:18:30.324 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.324 00:18:30.324 Latency(us) 00:18:30.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.324 =================================================================================================================== 00:18:30.324 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.324 [2024-07-15 17:00:36.880807] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:30.324 17:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 109564 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=109803 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 109803 /var/tmp/bdevperf.sock 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 109803 ']' 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.582 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.582 [2024-07-15 17:00:37.099794] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:18:30.582 [2024-07-15 17:00:37.099843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109803 ] 00:18:30.582 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.582 [2024-07-15 17:00:37.149802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.582 [2024-07-15 17:00:37.216949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.512 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.512 17:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:31.512 17:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:31.512 [2024-07-15 17:00:38.057497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:31.512 [2024-07-15 17:00:38.059371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bfaf0 (9): Bad file descriptor 00:18:31.512 [2024-07-15 17:00:38.060369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:31.512 [2024-07-15 17:00:38.060379] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:31.512 [2024-07-15 17:00:38.060387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:31.512 request: 00:18:31.512 { 00:18:31.512 "name": "TLSTEST", 00:18:31.512 "trtype": "tcp", 00:18:31.512 "traddr": "10.0.0.2", 00:18:31.512 "adrfam": "ipv4", 00:18:31.512 "trsvcid": "4420", 00:18:31.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.512 "prchk_reftag": false, 00:18:31.512 "prchk_guard": false, 00:18:31.512 "hdgst": false, 00:18:31.512 "ddgst": false, 00:18:31.512 "method": "bdev_nvme_attach_controller", 00:18:31.512 "req_id": 1 00:18:31.512 } 00:18:31.512 Got JSON-RPC error response 00:18:31.512 response: 00:18:31.512 { 00:18:31.512 "code": -5, 00:18:31.512 "message": "Input/output error" 00:18:31.512 } 00:18:31.512 17:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 109803 00:18:31.512 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 109803 ']' 00:18:31.513 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 109803 00:18:31.513 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:31.513 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.513 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109803 00:18:31.513 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:31.513 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:31.513 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109803' 00:18:31.513 killing process with pid 109803 00:18:31.513 17:00:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 109803 00:18:31.513 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.513 00:18:31.513 Latency(us) 00:18:31.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.513 =================================================================================================================== 00:18:31.513 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:31.513 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 109803 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 104515 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 104515 ']' 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 104515 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104515 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104515' 00:18:31.782 killing process with pid 104515 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 104515 
00:18:31.782 [2024-07-15 17:00:38.345005] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:31.782 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 104515 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.1Bh9UJOzOm 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.1Bh9UJOzOm 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=110050 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 110050 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 110050 ']' 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.039 17:00:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.039 [2024-07-15 17:00:38.632150] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:18:32.040 [2024-07-15 17:00:38.632198] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.040 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.040 [2024-07-15 17:00:38.689289] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.298 [2024-07-15 17:00:38.768069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.298 [2024-07-15 17:00:38.768109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:32.298 [2024-07-15 17:00:38.768118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.298 [2024-07-15 17:00:38.768124] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.298 [2024-07-15 17:00:38.768129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.298 [2024-07-15 17:00:38.768164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.862 17:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.863 17:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.863 17:00:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.863 17:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.863 17:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.863 17:00:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.863 17:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.1Bh9UJOzOm 00:18:32.863 17:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1Bh9UJOzOm 00:18:32.863 17:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:33.120 [2024-07-15 17:00:39.638364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.120 17:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:33.378 17:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:18:33.378 [2024-07-15 17:00:39.979236] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.378 [2024-07-15 17:00:39.979437] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.378 17:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:33.635 malloc0 00:18:33.635 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Bh9UJOzOm 00:18:33.893 [2024-07-15 17:00:40.496652] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1Bh9UJOzOm 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1Bh9UJOzOm' 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=110327 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 110327 /var/tmp/bdevperf.sock 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 110327 ']' 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.893 17:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.151 [2024-07-15 17:00:40.562330] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:18:34.151 [2024-07-15 17:00:40.562376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110327 ] 00:18:34.151 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.151 [2024-07-15 17:00:40.612537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.151 [2024-07-15 17:00:40.689134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.716 17:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.716 17:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:34.716 17:00:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Bh9UJOzOm 00:18:34.973 [2024-07-15 17:00:41.512307] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.974 [2024-07-15 17:00:41.512376] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:34.974 TLSTESTn1 00:18:34.974 17:00:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:35.231 Running I/O for 10 seconds... 
00:18:45.195 00:18:45.195 Latency(us) 00:18:45.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.195 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:45.195 Verification LBA range: start 0x0 length 0x2000 00:18:45.195 TLSTESTn1 : 10.01 5526.77 21.59 0.00 0.00 23123.32 4758.48 31685.23 00:18:45.195 =================================================================================================================== 00:18:45.195 Total : 5526.77 21.59 0.00 0.00 23123.32 4758.48 31685.23 00:18:45.195 0 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 110327 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 110327 ']' 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 110327 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110327 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110327' 00:18:45.195 killing process with pid 110327 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 110327 00:18:45.195 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.195 00:18:45.195 Latency(us) 00:18:45.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.195 
=================================================================================================================== 00:18:45.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.195 [2024-07-15 17:00:51.779287] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:45.195 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 110327 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.1Bh9UJOzOm 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1Bh9UJOzOm 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1Bh9UJOzOm 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1Bh9UJOzOm 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1Bh9UJOzOm' 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=112168 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 112168 /var/tmp/bdevperf.sock 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 112168 ']' 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.454 17:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.454 [2024-07-15 17:00:52.011983] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:18:45.454 [2024-07-15 17:00:52.012033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112168 ] 00:18:45.454 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.454 [2024-07-15 17:00:52.061924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.713 [2024-07-15 17:00:52.130522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.280 17:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.280 17:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:46.280 17:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Bh9UJOzOm 00:18:46.538 [2024-07-15 17:00:52.968375] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.538 [2024-07-15 17:00:52.968423] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:46.538 [2024-07-15 17:00:52.968431] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.1Bh9UJOzOm 00:18:46.538 request: 00:18:46.538 { 00:18:46.538 "name": "TLSTEST", 00:18:46.538 "trtype": "tcp", 00:18:46.538 "traddr": "10.0.0.2", 00:18:46.538 "adrfam": "ipv4", 00:18:46.538 "trsvcid": "4420", 00:18:46.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.538 "prchk_reftag": false, 00:18:46.538 "prchk_guard": false, 00:18:46.538 "hdgst": false, 00:18:46.538 "ddgst": false, 00:18:46.538 "psk": "/tmp/tmp.1Bh9UJOzOm", 00:18:46.538 "method": "bdev_nvme_attach_controller", 
00:18:46.538 "req_id": 1 00:18:46.538 } 00:18:46.538 Got JSON-RPC error response 00:18:46.538 response: 00:18:46.538 { 00:18:46.538 "code": -1, 00:18:46.538 "message": "Operation not permitted" 00:18:46.538 } 00:18:46.538 17:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 112168 00:18:46.538 17:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 112168 ']' 00:18:46.538 17:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 112168 00:18:46.538 17:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:46.538 17:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.538 17:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112168 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112168' 00:18:46.538 killing process with pid 112168 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 112168 00:18:46.538 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.538 00:18:46.538 Latency(us) 00:18:46.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.538 =================================================================================================================== 00:18:46.538 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 112168 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:46.538 17:00:53 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 110050 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 110050 ']' 00:18:46.538 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 110050 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110050 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110050' 00:18:46.796 killing process with pid 110050 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 110050 00:18:46.796 [2024-07-15 17:00:53.251314] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 110050 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=112408 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 112408 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 112408 ']' 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.796 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.797 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.797 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.797 17:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.053 [2024-07-15 17:00:53.498477] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:18:47.053 [2024-07-15 17:00:53.498527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.053 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.053 [2024-07-15 17:00:53.557803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.053 [2024-07-15 17:00:53.634634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.053 [2024-07-15 17:00:53.634674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:47.053 [2024-07-15 17:00:53.634681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.053 [2024-07-15 17:00:53.634687] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.053 [2024-07-15 17:00:53.634693] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.053 [2024-07-15 17:00:53.634711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.986 17:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.986 17:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:47.986 17:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:47.986 17:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:47.986 17:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.986 17:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.987 17:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.1Bh9UJOzOm 00:18:47.987 17:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:47.987 17:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.1Bh9UJOzOm 00:18:47.987 17:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:47.987 17:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.987 17:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:47.987 17:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.987 17:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.1Bh9UJOzOm 00:18:47.987 17:00:54 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.1Bh9UJOzOm 00:18:47.987 17:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:47.987 [2024-07-15 17:00:54.490728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.987 17:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:48.244 17:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:48.244 [2024-07-15 17:00:54.831594] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:48.244 [2024-07-15 17:00:54.831772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.244 17:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:48.501 malloc0 00:18:48.501 17:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Bh9UJOzOm 00:18:48.760 [2024-07-15 17:00:55.349102] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:48.760 [2024-07-15 17:00:55.349130] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:48.760 [2024-07-15 17:00:55.349167] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:48.760 
request: 00:18:48.760 { 00:18:48.760 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.760 "host": "nqn.2016-06.io.spdk:host1", 00:18:48.760 "psk": "/tmp/tmp.1Bh9UJOzOm", 00:18:48.760 "method": "nvmf_subsystem_add_host", 00:18:48.760 "req_id": 1 00:18:48.760 } 00:18:48.760 Got JSON-RPC error response 00:18:48.760 response: 00:18:48.760 { 00:18:48.760 "code": -32603, 00:18:48.760 "message": "Internal error" 00:18:48.760 } 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 112408 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 112408 ']' 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 112408 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112408 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112408' 00:18:48.760 killing process with pid 112408 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 112408 00:18:48.760 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 112408 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.1Bh9UJOzOm 
00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=112894 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 112894 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 112894 ']' 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.017 17:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.017 [2024-07-15 17:00:55.648690] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:18:49.017 [2024-07-15 17:00:55.648735] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.017 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.274 [2024-07-15 17:00:55.705925] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.274 [2024-07-15 17:00:55.784137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.274 [2024-07-15 17:00:55.784175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.274 [2024-07-15 17:00:55.784182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.274 [2024-07-15 17:00:55.784191] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.274 [2024-07-15 17:00:55.784196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:49.274 [2024-07-15 17:00:55.784214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.870 17:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.870 17:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:49.870 17:00:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:49.870 17:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.870 17:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.870 17:00:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.870 17:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.1Bh9UJOzOm 00:18:49.870 17:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1Bh9UJOzOm 00:18:49.870 17:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:50.128 [2024-07-15 17:00:56.630193] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.128 17:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:50.386 17:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:50.386 [2024-07-15 17:00:56.967042] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.386 [2024-07-15 17:00:56.967254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.386 17:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:18:50.644 malloc0 00:18:50.644 17:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:50.902 17:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Bh9UJOzOm 00:18:50.902 [2024-07-15 17:00:57.472382] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:50.902 17:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:50.902 17:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=113155 00:18:50.902 17:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.902 17:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 113155 /var/tmp/bdevperf.sock 00:18:50.902 17:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 113155 ']' 00:18:50.902 17:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.902 17:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.902 17:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:50.902 17:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.902 17:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.902 [2024-07-15 17:00:57.534650] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:18:50.902 [2024-07-15 17:00:57.534700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113155 ] 00:18:50.902 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.161 [2024-07-15 17:00:57.586688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.161 [2024-07-15 17:00:57.664955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.729 17:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.729 17:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:51.729 17:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Bh9UJOzOm 00:18:52.022 [2024-07-15 17:00:58.484279] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.022 [2024-07-15 17:00:58.484369] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:52.022 TLSTESTn1 00:18:52.022 17:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:52.284 17:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:52.284 "subsystems": [ 00:18:52.284 { 00:18:52.284 
"subsystem": "keyring", 00:18:52.284 "config": [] 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "subsystem": "iobuf", 00:18:52.284 "config": [ 00:18:52.284 { 00:18:52.284 "method": "iobuf_set_options", 00:18:52.284 "params": { 00:18:52.284 "small_pool_count": 8192, 00:18:52.284 "large_pool_count": 1024, 00:18:52.284 "small_bufsize": 8192, 00:18:52.284 "large_bufsize": 135168 00:18:52.284 } 00:18:52.284 } 00:18:52.284 ] 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "subsystem": "sock", 00:18:52.284 "config": [ 00:18:52.284 { 00:18:52.284 "method": "sock_set_default_impl", 00:18:52.284 "params": { 00:18:52.284 "impl_name": "posix" 00:18:52.284 } 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "method": "sock_impl_set_options", 00:18:52.284 "params": { 00:18:52.284 "impl_name": "ssl", 00:18:52.284 "recv_buf_size": 4096, 00:18:52.284 "send_buf_size": 4096, 00:18:52.284 "enable_recv_pipe": true, 00:18:52.284 "enable_quickack": false, 00:18:52.284 "enable_placement_id": 0, 00:18:52.284 "enable_zerocopy_send_server": true, 00:18:52.284 "enable_zerocopy_send_client": false, 00:18:52.284 "zerocopy_threshold": 0, 00:18:52.284 "tls_version": 0, 00:18:52.284 "enable_ktls": false 00:18:52.284 } 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "method": "sock_impl_set_options", 00:18:52.284 "params": { 00:18:52.284 "impl_name": "posix", 00:18:52.284 "recv_buf_size": 2097152, 00:18:52.284 "send_buf_size": 2097152, 00:18:52.284 "enable_recv_pipe": true, 00:18:52.284 "enable_quickack": false, 00:18:52.284 "enable_placement_id": 0, 00:18:52.284 "enable_zerocopy_send_server": true, 00:18:52.284 "enable_zerocopy_send_client": false, 00:18:52.284 "zerocopy_threshold": 0, 00:18:52.284 "tls_version": 0, 00:18:52.284 "enable_ktls": false 00:18:52.284 } 00:18:52.284 } 00:18:52.284 ] 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "subsystem": "vmd", 00:18:52.284 "config": [] 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "subsystem": "accel", 00:18:52.284 "config": [ 00:18:52.284 { 00:18:52.284 "method": 
"accel_set_options", 00:18:52.284 "params": { 00:18:52.284 "small_cache_size": 128, 00:18:52.284 "large_cache_size": 16, 00:18:52.284 "task_count": 2048, 00:18:52.284 "sequence_count": 2048, 00:18:52.284 "buf_count": 2048 00:18:52.284 } 00:18:52.284 } 00:18:52.284 ] 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "subsystem": "bdev", 00:18:52.284 "config": [ 00:18:52.284 { 00:18:52.284 "method": "bdev_set_options", 00:18:52.284 "params": { 00:18:52.284 "bdev_io_pool_size": 65535, 00:18:52.284 "bdev_io_cache_size": 256, 00:18:52.284 "bdev_auto_examine": true, 00:18:52.284 "iobuf_small_cache_size": 128, 00:18:52.284 "iobuf_large_cache_size": 16 00:18:52.284 } 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "method": "bdev_raid_set_options", 00:18:52.284 "params": { 00:18:52.284 "process_window_size_kb": 1024 00:18:52.284 } 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "method": "bdev_iscsi_set_options", 00:18:52.284 "params": { 00:18:52.284 "timeout_sec": 30 00:18:52.284 } 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "method": "bdev_nvme_set_options", 00:18:52.284 "params": { 00:18:52.284 "action_on_timeout": "none", 00:18:52.284 "timeout_us": 0, 00:18:52.284 "timeout_admin_us": 0, 00:18:52.284 "keep_alive_timeout_ms": 10000, 00:18:52.284 "arbitration_burst": 0, 00:18:52.284 "low_priority_weight": 0, 00:18:52.284 "medium_priority_weight": 0, 00:18:52.284 "high_priority_weight": 0, 00:18:52.284 "nvme_adminq_poll_period_us": 10000, 00:18:52.284 "nvme_ioq_poll_period_us": 0, 00:18:52.284 "io_queue_requests": 0, 00:18:52.284 "delay_cmd_submit": true, 00:18:52.284 "transport_retry_count": 4, 00:18:52.284 "bdev_retry_count": 3, 00:18:52.284 "transport_ack_timeout": 0, 00:18:52.284 "ctrlr_loss_timeout_sec": 0, 00:18:52.284 "reconnect_delay_sec": 0, 00:18:52.284 "fast_io_fail_timeout_sec": 0, 00:18:52.284 "disable_auto_failback": false, 00:18:52.284 "generate_uuids": false, 00:18:52.284 "transport_tos": 0, 00:18:52.284 "nvme_error_stat": false, 00:18:52.284 "rdma_srq_size": 0, 
00:18:52.284 "io_path_stat": false, 00:18:52.284 "allow_accel_sequence": false, 00:18:52.284 "rdma_max_cq_size": 0, 00:18:52.284 "rdma_cm_event_timeout_ms": 0, 00:18:52.284 "dhchap_digests": [ 00:18:52.284 "sha256", 00:18:52.284 "sha384", 00:18:52.284 "sha512" 00:18:52.284 ], 00:18:52.284 "dhchap_dhgroups": [ 00:18:52.284 "null", 00:18:52.284 "ffdhe2048", 00:18:52.284 "ffdhe3072", 00:18:52.284 "ffdhe4096", 00:18:52.284 "ffdhe6144", 00:18:52.284 "ffdhe8192" 00:18:52.284 ] 00:18:52.284 } 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "method": "bdev_nvme_set_hotplug", 00:18:52.284 "params": { 00:18:52.284 "period_us": 100000, 00:18:52.284 "enable": false 00:18:52.284 } 00:18:52.284 }, 00:18:52.284 { 00:18:52.284 "method": "bdev_malloc_create", 00:18:52.284 "params": { 00:18:52.284 "name": "malloc0", 00:18:52.284 "num_blocks": 8192, 00:18:52.284 "block_size": 4096, 00:18:52.284 "physical_block_size": 4096, 00:18:52.284 "uuid": "7a3b21f1-dd4b-474a-ba12-041efe1d4472", 00:18:52.285 "optimal_io_boundary": 0 00:18:52.285 } 00:18:52.285 }, 00:18:52.285 { 00:18:52.285 "method": "bdev_wait_for_examine" 00:18:52.285 } 00:18:52.285 ] 00:18:52.285 }, 00:18:52.285 { 00:18:52.285 "subsystem": "nbd", 00:18:52.285 "config": [] 00:18:52.285 }, 00:18:52.285 { 00:18:52.285 "subsystem": "scheduler", 00:18:52.285 "config": [ 00:18:52.285 { 00:18:52.285 "method": "framework_set_scheduler", 00:18:52.285 "params": { 00:18:52.285 "name": "static" 00:18:52.285 } 00:18:52.285 } 00:18:52.285 ] 00:18:52.285 }, 00:18:52.285 { 00:18:52.285 "subsystem": "nvmf", 00:18:52.285 "config": [ 00:18:52.285 { 00:18:52.285 "method": "nvmf_set_config", 00:18:52.285 "params": { 00:18:52.285 "discovery_filter": "match_any", 00:18:52.285 "admin_cmd_passthru": { 00:18:52.285 "identify_ctrlr": false 00:18:52.285 } 00:18:52.285 } 00:18:52.285 }, 00:18:52.285 { 00:18:52.285 "method": "nvmf_set_max_subsystems", 00:18:52.285 "params": { 00:18:52.285 "max_subsystems": 1024 00:18:52.285 } 00:18:52.285 }, 00:18:52.285 { 
00:18:52.285 "method": "nvmf_set_crdt", 00:18:52.285 "params": { 00:18:52.285 "crdt1": 0, 00:18:52.285 "crdt2": 0, 00:18:52.285 "crdt3": 0 00:18:52.285 } 00:18:52.285 }, 00:18:52.285 { 00:18:52.285 "method": "nvmf_create_transport", 00:18:52.285 "params": { 00:18:52.285 "trtype": "TCP", 00:18:52.285 "max_queue_depth": 128, 00:18:52.285 "max_io_qpairs_per_ctrlr": 127, 00:18:52.285 "in_capsule_data_size": 4096, 00:18:52.285 "max_io_size": 131072, 00:18:52.285 "io_unit_size": 131072, 00:18:52.285 "max_aq_depth": 128, 00:18:52.285 "num_shared_buffers": 511, 00:18:52.285 "buf_cache_size": 4294967295, 00:18:52.285 "dif_insert_or_strip": false, 00:18:52.285 "zcopy": false, 00:18:52.285 "c2h_success": false, 00:18:52.285 "sock_priority": 0, 00:18:52.285 "abort_timeout_sec": 1, 00:18:52.285 "ack_timeout": 0, 00:18:52.285 "data_wr_pool_size": 0 00:18:52.285 } 00:18:52.285 }, 00:18:52.285 { 00:18:52.285 "method": "nvmf_create_subsystem", 00:18:52.285 "params": { 00:18:52.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.285 "allow_any_host": false, 00:18:52.285 "serial_number": "SPDK00000000000001", 00:18:52.285 "model_number": "SPDK bdev Controller", 00:18:52.285 "max_namespaces": 10, 00:18:52.285 "min_cntlid": 1, 00:18:52.285 "max_cntlid": 65519, 00:18:52.285 "ana_reporting": false 00:18:52.285 } 00:18:52.285 }, 00:18:52.285 { 00:18:52.285 "method": "nvmf_subsystem_add_host", 00:18:52.285 "params": { 00:18:52.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.285 "host": "nqn.2016-06.io.spdk:host1", 00:18:52.285 "psk": "/tmp/tmp.1Bh9UJOzOm" 00:18:52.285 } 00:18:52.285 }, 00:18:52.285 { 00:18:52.285 "method": "nvmf_subsystem_add_ns", 00:18:52.285 "params": { 00:18:52.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.285 "namespace": { 00:18:52.285 "nsid": 1, 00:18:52.285 "bdev_name": "malloc0", 00:18:52.285 "nguid": "7A3B21F1DD4B474ABA12041EFE1D4472", 00:18:52.285 "uuid": "7a3b21f1-dd4b-474a-ba12-041efe1d4472", 00:18:52.285 "no_auto_visible": false 00:18:52.285 } 00:18:52.285 
} 00:18:52.285 }, 00:18:52.285 { 00:18:52.285 "method": "nvmf_subsystem_add_listener", 00:18:52.285 "params": { 00:18:52.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.285 "listen_address": { 00:18:52.285 "trtype": "TCP", 00:18:52.285 "adrfam": "IPv4", 00:18:52.285 "traddr": "10.0.0.2", 00:18:52.285 "trsvcid": "4420" 00:18:52.285 }, 00:18:52.285 "secure_channel": true 00:18:52.285 } 00:18:52.285 } 00:18:52.285 ] 00:18:52.285 } 00:18:52.285 ] 00:18:52.285 }' 00:18:52.285 17:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:52.545 17:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:52.545 "subsystems": [ 00:18:52.545 { 00:18:52.545 "subsystem": "keyring", 00:18:52.545 "config": [] 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "subsystem": "iobuf", 00:18:52.545 "config": [ 00:18:52.545 { 00:18:52.545 "method": "iobuf_set_options", 00:18:52.545 "params": { 00:18:52.545 "small_pool_count": 8192, 00:18:52.545 "large_pool_count": 1024, 00:18:52.545 "small_bufsize": 8192, 00:18:52.545 "large_bufsize": 135168 00:18:52.545 } 00:18:52.545 } 00:18:52.545 ] 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "subsystem": "sock", 00:18:52.545 "config": [ 00:18:52.545 { 00:18:52.545 "method": "sock_set_default_impl", 00:18:52.545 "params": { 00:18:52.545 "impl_name": "posix" 00:18:52.545 } 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "method": "sock_impl_set_options", 00:18:52.545 "params": { 00:18:52.545 "impl_name": "ssl", 00:18:52.545 "recv_buf_size": 4096, 00:18:52.545 "send_buf_size": 4096, 00:18:52.545 "enable_recv_pipe": true, 00:18:52.545 "enable_quickack": false, 00:18:52.545 "enable_placement_id": 0, 00:18:52.545 "enable_zerocopy_send_server": true, 00:18:52.545 "enable_zerocopy_send_client": false, 00:18:52.545 "zerocopy_threshold": 0, 00:18:52.545 "tls_version": 0, 00:18:52.545 "enable_ktls": false 00:18:52.545 } 00:18:52.545 }, 00:18:52.545 { 
00:18:52.545 "method": "sock_impl_set_options", 00:18:52.545 "params": { 00:18:52.545 "impl_name": "posix", 00:18:52.545 "recv_buf_size": 2097152, 00:18:52.545 "send_buf_size": 2097152, 00:18:52.545 "enable_recv_pipe": true, 00:18:52.545 "enable_quickack": false, 00:18:52.545 "enable_placement_id": 0, 00:18:52.545 "enable_zerocopy_send_server": true, 00:18:52.545 "enable_zerocopy_send_client": false, 00:18:52.545 "zerocopy_threshold": 0, 00:18:52.545 "tls_version": 0, 00:18:52.545 "enable_ktls": false 00:18:52.545 } 00:18:52.545 } 00:18:52.545 ] 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "subsystem": "vmd", 00:18:52.545 "config": [] 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "subsystem": "accel", 00:18:52.545 "config": [ 00:18:52.545 { 00:18:52.545 "method": "accel_set_options", 00:18:52.545 "params": { 00:18:52.545 "small_cache_size": 128, 00:18:52.545 "large_cache_size": 16, 00:18:52.545 "task_count": 2048, 00:18:52.545 "sequence_count": 2048, 00:18:52.545 "buf_count": 2048 00:18:52.545 } 00:18:52.545 } 00:18:52.545 ] 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "subsystem": "bdev", 00:18:52.545 "config": [ 00:18:52.545 { 00:18:52.545 "method": "bdev_set_options", 00:18:52.545 "params": { 00:18:52.545 "bdev_io_pool_size": 65535, 00:18:52.545 "bdev_io_cache_size": 256, 00:18:52.545 "bdev_auto_examine": true, 00:18:52.545 "iobuf_small_cache_size": 128, 00:18:52.545 "iobuf_large_cache_size": 16 00:18:52.545 } 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "method": "bdev_raid_set_options", 00:18:52.545 "params": { 00:18:52.545 "process_window_size_kb": 1024 00:18:52.545 } 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "method": "bdev_iscsi_set_options", 00:18:52.545 "params": { 00:18:52.545 "timeout_sec": 30 00:18:52.545 } 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "method": "bdev_nvme_set_options", 00:18:52.545 "params": { 00:18:52.545 "action_on_timeout": "none", 00:18:52.545 "timeout_us": 0, 00:18:52.545 "timeout_admin_us": 0, 00:18:52.545 "keep_alive_timeout_ms": 
10000, 00:18:52.545 "arbitration_burst": 0, 00:18:52.545 "low_priority_weight": 0, 00:18:52.545 "medium_priority_weight": 0, 00:18:52.545 "high_priority_weight": 0, 00:18:52.545 "nvme_adminq_poll_period_us": 10000, 00:18:52.545 "nvme_ioq_poll_period_us": 0, 00:18:52.545 "io_queue_requests": 512, 00:18:52.545 "delay_cmd_submit": true, 00:18:52.545 "transport_retry_count": 4, 00:18:52.545 "bdev_retry_count": 3, 00:18:52.545 "transport_ack_timeout": 0, 00:18:52.545 "ctrlr_loss_timeout_sec": 0, 00:18:52.545 "reconnect_delay_sec": 0, 00:18:52.545 "fast_io_fail_timeout_sec": 0, 00:18:52.545 "disable_auto_failback": false, 00:18:52.545 "generate_uuids": false, 00:18:52.545 "transport_tos": 0, 00:18:52.545 "nvme_error_stat": false, 00:18:52.545 "rdma_srq_size": 0, 00:18:52.545 "io_path_stat": false, 00:18:52.545 "allow_accel_sequence": false, 00:18:52.545 "rdma_max_cq_size": 0, 00:18:52.545 "rdma_cm_event_timeout_ms": 0, 00:18:52.545 "dhchap_digests": [ 00:18:52.545 "sha256", 00:18:52.545 "sha384", 00:18:52.545 "sha512" 00:18:52.545 ], 00:18:52.545 "dhchap_dhgroups": [ 00:18:52.545 "null", 00:18:52.545 "ffdhe2048", 00:18:52.545 "ffdhe3072", 00:18:52.545 "ffdhe4096", 00:18:52.545 "ffdhe6144", 00:18:52.545 "ffdhe8192" 00:18:52.545 ] 00:18:52.545 } 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "method": "bdev_nvme_attach_controller", 00:18:52.545 "params": { 00:18:52.545 "name": "TLSTEST", 00:18:52.545 "trtype": "TCP", 00:18:52.545 "adrfam": "IPv4", 00:18:52.545 "traddr": "10.0.0.2", 00:18:52.545 "trsvcid": "4420", 00:18:52.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.545 "prchk_reftag": false, 00:18:52.545 "prchk_guard": false, 00:18:52.545 "ctrlr_loss_timeout_sec": 0, 00:18:52.545 "reconnect_delay_sec": 0, 00:18:52.545 "fast_io_fail_timeout_sec": 0, 00:18:52.545 "psk": "/tmp/tmp.1Bh9UJOzOm", 00:18:52.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.545 "hdgst": false, 00:18:52.545 "ddgst": false 00:18:52.545 } 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "method": 
"bdev_nvme_set_hotplug", 00:18:52.545 "params": { 00:18:52.545 "period_us": 100000, 00:18:52.545 "enable": false 00:18:52.545 } 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "method": "bdev_wait_for_examine" 00:18:52.545 } 00:18:52.545 ] 00:18:52.545 }, 00:18:52.545 { 00:18:52.545 "subsystem": "nbd", 00:18:52.545 "config": [] 00:18:52.545 } 00:18:52.545 ] 00:18:52.545 }' 00:18:52.545 17:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 113155 00:18:52.546 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 113155 ']' 00:18:52.546 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 113155 00:18:52.546 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:52.546 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:52.546 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113155 00:18:52.546 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:52.546 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:52.546 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113155' 00:18:52.546 killing process with pid 113155 00:18:52.546 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 113155 00:18:52.546 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.546 00:18:52.546 Latency(us) 00:18:52.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.546 =================================================================================================================== 00:18:52.546 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:52.546 [2024-07-15 17:00:59.098313] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:52.546 
17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 113155 00:18:52.805 17:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 112894 00:18:52.805 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 112894 ']' 00:18:52.805 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 112894 00:18:52.805 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:52.805 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:52.805 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112894 00:18:52.805 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:52.805 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:52.805 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112894' 00:18:52.805 killing process with pid 112894 00:18:52.805 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 112894 00:18:52.805 [2024-07-15 17:00:59.326315] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:52.805 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 112894 00:18:53.063 17:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:53.064 17:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.064 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:53.064 17:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:53.064 "subsystems": [ 00:18:53.064 { 00:18:53.064 "subsystem": "keyring", 00:18:53.064 "config": [] 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "subsystem": "iobuf", 00:18:53.064 "config": [ 00:18:53.064 { 00:18:53.064 "method": "iobuf_set_options", 00:18:53.064 
"params": { 00:18:53.064 "small_pool_count": 8192, 00:18:53.064 "large_pool_count": 1024, 00:18:53.064 "small_bufsize": 8192, 00:18:53.064 "large_bufsize": 135168 00:18:53.064 } 00:18:53.064 } 00:18:53.064 ] 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "subsystem": "sock", 00:18:53.064 "config": [ 00:18:53.064 { 00:18:53.064 "method": "sock_set_default_impl", 00:18:53.064 "params": { 00:18:53.064 "impl_name": "posix" 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "sock_impl_set_options", 00:18:53.064 "params": { 00:18:53.064 "impl_name": "ssl", 00:18:53.064 "recv_buf_size": 4096, 00:18:53.064 "send_buf_size": 4096, 00:18:53.064 "enable_recv_pipe": true, 00:18:53.064 "enable_quickack": false, 00:18:53.064 "enable_placement_id": 0, 00:18:53.064 "enable_zerocopy_send_server": true, 00:18:53.064 "enable_zerocopy_send_client": false, 00:18:53.064 "zerocopy_threshold": 0, 00:18:53.064 "tls_version": 0, 00:18:53.064 "enable_ktls": false 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "sock_impl_set_options", 00:18:53.064 "params": { 00:18:53.064 "impl_name": "posix", 00:18:53.064 "recv_buf_size": 2097152, 00:18:53.064 "send_buf_size": 2097152, 00:18:53.064 "enable_recv_pipe": true, 00:18:53.064 "enable_quickack": false, 00:18:53.064 "enable_placement_id": 0, 00:18:53.064 "enable_zerocopy_send_server": true, 00:18:53.064 "enable_zerocopy_send_client": false, 00:18:53.064 "zerocopy_threshold": 0, 00:18:53.064 "tls_version": 0, 00:18:53.064 "enable_ktls": false 00:18:53.064 } 00:18:53.064 } 00:18:53.064 ] 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "subsystem": "vmd", 00:18:53.064 "config": [] 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "subsystem": "accel", 00:18:53.064 "config": [ 00:18:53.064 { 00:18:53.064 "method": "accel_set_options", 00:18:53.064 "params": { 00:18:53.064 "small_cache_size": 128, 00:18:53.064 "large_cache_size": 16, 00:18:53.064 "task_count": 2048, 00:18:53.064 "sequence_count": 2048, 00:18:53.064 "buf_count": 
2048 00:18:53.064 } 00:18:53.064 } 00:18:53.064 ] 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "subsystem": "bdev", 00:18:53.064 "config": [ 00:18:53.064 { 00:18:53.064 "method": "bdev_set_options", 00:18:53.064 "params": { 00:18:53.064 "bdev_io_pool_size": 65535, 00:18:53.064 "bdev_io_cache_size": 256, 00:18:53.064 "bdev_auto_examine": true, 00:18:53.064 "iobuf_small_cache_size": 128, 00:18:53.064 "iobuf_large_cache_size": 16 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "bdev_raid_set_options", 00:18:53.064 "params": { 00:18:53.064 "process_window_size_kb": 1024 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "bdev_iscsi_set_options", 00:18:53.064 "params": { 00:18:53.064 "timeout_sec": 30 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "bdev_nvme_set_options", 00:18:53.064 "params": { 00:18:53.064 "action_on_timeout": "none", 00:18:53.064 "timeout_us": 0, 00:18:53.064 "timeout_admin_us": 0, 00:18:53.064 "keep_alive_timeout_ms": 10000, 00:18:53.064 "arbitration_burst": 0, 00:18:53.064 "low_priority_weight": 0, 00:18:53.064 "medium_priority_weight": 0, 00:18:53.064 "high_priority_weight": 0, 00:18:53.064 "nvme_adminq_poll_period_us": 10000, 00:18:53.064 "nvme_ioq_poll_period_us": 0, 00:18:53.064 "io_queue_requests": 0, 00:18:53.064 "delay_cmd_submit": true, 00:18:53.064 "transport_retry_count": 4, 00:18:53.064 "bdev_retry_count": 3, 00:18:53.064 "transport_ack_timeout": 0, 00:18:53.064 "ctrlr_loss_timeout_sec": 0, 00:18:53.064 "reconnect_delay_sec": 0, 00:18:53.064 "fast_io_fail_timeout_sec": 0, 00:18:53.064 "disable_auto_failback": false, 00:18:53.064 "generate_uuids": false, 00:18:53.064 "transport_tos": 0, 00:18:53.064 "nvme_error_stat": false, 00:18:53.064 "rdma_srq_size": 0, 00:18:53.064 "io_path_stat": false, 00:18:53.064 "allow_accel_sequence": false, 00:18:53.064 "rdma_max_cq_size": 0, 00:18:53.064 "rdma_cm_event_timeout_ms": 0, 00:18:53.064 "dhchap_digests": [ 00:18:53.064 "sha256", 
00:18:53.064 "sha384", 00:18:53.064 "sha512" 00:18:53.064 ], 00:18:53.064 "dhchap_dhgroups": [ 00:18:53.064 "null", 00:18:53.064 "ffdhe2048", 00:18:53.064 "ffdhe3072", 00:18:53.064 "ffdhe4096", 00:18:53.064 "ffdhe6144", 00:18:53.064 "ffdhe8192" 00:18:53.064 ] 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "bdev_nvme_set_hotplug", 00:18:53.064 "params": { 00:18:53.064 "period_us": 100000, 00:18:53.064 "enable": false 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "bdev_malloc_create", 00:18:53.064 "params": { 00:18:53.064 "name": "malloc0", 00:18:53.064 "num_blocks": 8192, 00:18:53.064 "block_size": 4096, 00:18:53.064 "physical_block_size": 4096, 00:18:53.064 "uuid": "7a3b21f1-dd4b-474a-ba12-041efe1d4472", 00:18:53.064 "optimal_io_boundary": 0 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "bdev_wait_for_examine" 00:18:53.064 } 00:18:53.064 ] 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "subsystem": "nbd", 00:18:53.064 "config": [] 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "subsystem": "scheduler", 00:18:53.064 "config": [ 00:18:53.064 { 00:18:53.064 "method": "framework_set_scheduler", 00:18:53.064 "params": { 00:18:53.064 "name": "static" 00:18:53.064 } 00:18:53.064 } 00:18:53.064 ] 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "subsystem": "nvmf", 00:18:53.064 "config": [ 00:18:53.064 { 00:18:53.064 "method": "nvmf_set_config", 00:18:53.064 "params": { 00:18:53.064 "discovery_filter": "match_any", 00:18:53.064 "admin_cmd_passthru": { 00:18:53.064 "identify_ctrlr": false 00:18:53.064 } 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "nvmf_set_max_subsystems", 00:18:53.064 "params": { 00:18:53.064 "max_subsystems": 1024 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "nvmf_set_crdt", 00:18:53.064 "params": { 00:18:53.064 "crdt1": 0, 00:18:53.064 "crdt2": 0, 00:18:53.064 "crdt3": 0 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": 
"nvmf_create_transport", 00:18:53.064 "params": { 00:18:53.064 "trtype": "TCP", 00:18:53.064 "max_queue_depth": 128, 00:18:53.064 "max_io_qpairs_per_ctrlr": 127, 00:18:53.064 "in_capsule_data_size": 4096, 00:18:53.064 "max_io_size": 131072, 00:18:53.064 "io_unit_size": 131072, 00:18:53.064 "max_aq_depth": 128, 00:18:53.064 "num_shared_buffers": 511, 00:18:53.064 "buf_cache_size": 4294967295, 00:18:53.064 "dif_insert_or_strip": false, 00:18:53.064 "zcopy": false, 00:18:53.064 "c2h_success": false, 00:18:53.064 "sock_priority": 0, 00:18:53.064 "abort_timeout_sec": 1, 00:18:53.064 "ack_timeout": 0, 00:18:53.064 "data_wr_pool_size": 0 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "nvmf_create_subsystem", 00:18:53.064 "params": { 00:18:53.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.064 "allow_any_host": false, 00:18:53.064 "serial_number": "SPDK00000000000001", 00:18:53.064 "model_number": "SPDK bdev Controller", 00:18:53.064 "max_namespaces": 10, 00:18:53.064 "min_cntlid": 1, 00:18:53.064 "max_cntlid": 65519, 00:18:53.064 "ana_reporting": false 00:18:53.064 } 00:18:53.064 }, 00:18:53.064 { 00:18:53.064 "method": "nvmf_subsystem_add_host", 00:18:53.065 "params": { 00:18:53.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.065 "host": "nqn.2016-06.io.spdk:host1", 00:18:53.065 "psk": "/tmp/tmp.1Bh9UJOzOm" 00:18:53.065 } 00:18:53.065 }, 00:18:53.065 { 00:18:53.065 "method": "nvmf_subsystem_add_ns", 00:18:53.065 "params": { 00:18:53.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.065 "namespace": { 00:18:53.065 "nsid": 1, 00:18:53.065 "bdev_name": "malloc0", 00:18:53.065 "nguid": "7A3B21F1DD4B474ABA12041EFE1D4472", 00:18:53.065 "uuid": "7a3b21f1-dd4b-474a-ba12-041efe1d4472", 00:18:53.065 "no_auto_visible": false 00:18:53.065 } 00:18:53.065 } 00:18:53.065 }, 00:18:53.065 { 00:18:53.065 "method": "nvmf_subsystem_add_listener", 00:18:53.065 "params": { 00:18:53.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.065 "listen_address": { 00:18:53.065 
"trtype": "TCP", 00:18:53.065 "adrfam": "IPv4", 00:18:53.065 "traddr": "10.0.0.2", 00:18:53.065 "trsvcid": "4420" 00:18:53.065 }, 00:18:53.065 "secure_channel": true 00:18:53.065 } 00:18:53.065 } 00:18:53.065 ] 00:18:53.065 } 00:18:53.065 ] 00:18:53.065 }' 00:18:53.065 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.065 17:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=113623 00:18:53.065 17:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:53.065 17:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 113623 00:18:53.065 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 113623 ']' 00:18:53.065 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.065 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.065 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.065 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.065 17:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.065 [2024-07-15 17:00:59.574837] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:18:53.065 [2024-07-15 17:00:59.574884] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.065 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.065 [2024-07-15 17:00:59.631783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.065 [2024-07-15 17:00:59.709587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.065 [2024-07-15 17:00:59.709621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.065 [2024-07-15 17:00:59.709628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.065 [2024-07-15 17:00:59.709634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.065 [2024-07-15 17:00:59.709639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:53.065 [2024-07-15 17:00:59.709706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.324 [2024-07-15 17:00:59.913394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.324 [2024-07-15 17:00:59.929370] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:53.324 [2024-07-15 17:00:59.945417] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.324 [2024-07-15 17:00:59.953356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=113669 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 113669 /var/tmp/bdevperf.sock 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 113669 ']' 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.891 17:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:53.891 "subsystems": [ 00:18:53.891 { 00:18:53.891 "subsystem": "keyring", 00:18:53.891 "config": [] 00:18:53.891 }, 00:18:53.891 { 00:18:53.891 "subsystem": "iobuf", 00:18:53.891 "config": [ 00:18:53.891 { 00:18:53.891 "method": "iobuf_set_options", 00:18:53.891 "params": { 00:18:53.891 "small_pool_count": 8192, 00:18:53.891 "large_pool_count": 1024, 00:18:53.891 "small_bufsize": 8192, 00:18:53.891 "large_bufsize": 135168 00:18:53.891 } 00:18:53.891 } 00:18:53.891 ] 00:18:53.891 }, 00:18:53.891 { 00:18:53.891 "subsystem": "sock", 00:18:53.891 "config": [ 00:18:53.891 { 00:18:53.891 "method": "sock_set_default_impl", 00:18:53.891 "params": { 00:18:53.891 "impl_name": "posix" 00:18:53.891 } 00:18:53.891 }, 00:18:53.891 { 00:18:53.891 "method": "sock_impl_set_options", 00:18:53.891 "params": { 00:18:53.891 "impl_name": "ssl", 00:18:53.891 "recv_buf_size": 4096, 00:18:53.891 "send_buf_size": 4096, 00:18:53.891 "enable_recv_pipe": true, 00:18:53.891 "enable_quickack": false, 00:18:53.891 "enable_placement_id": 0, 00:18:53.891 "enable_zerocopy_send_server": true, 00:18:53.891 "enable_zerocopy_send_client": false, 00:18:53.891 "zerocopy_threshold": 0, 00:18:53.891 "tls_version": 0, 00:18:53.891 "enable_ktls": false 00:18:53.891 } 00:18:53.891 }, 00:18:53.891 { 00:18:53.891 "method": "sock_impl_set_options", 00:18:53.891 "params": { 00:18:53.891 "impl_name": "posix", 00:18:53.891 "recv_buf_size": 2097152, 00:18:53.891 "send_buf_size": 2097152, 00:18:53.891 "enable_recv_pipe": true, 00:18:53.891 "enable_quickack": false, 00:18:53.891 "enable_placement_id": 0, 00:18:53.891 "enable_zerocopy_send_server": true, 00:18:53.891 "enable_zerocopy_send_client": false, 
00:18:53.891 "zerocopy_threshold": 0, 00:18:53.891 "tls_version": 0, 00:18:53.891 "enable_ktls": false 00:18:53.891 } 00:18:53.891 } 00:18:53.891 ] 00:18:53.891 }, 00:18:53.891 { 00:18:53.891 "subsystem": "vmd", 00:18:53.891 "config": [] 00:18:53.891 }, 00:18:53.891 { 00:18:53.891 "subsystem": "accel", 00:18:53.891 "config": [ 00:18:53.891 { 00:18:53.891 "method": "accel_set_options", 00:18:53.891 "params": { 00:18:53.891 "small_cache_size": 128, 00:18:53.891 "large_cache_size": 16, 00:18:53.891 "task_count": 2048, 00:18:53.891 "sequence_count": 2048, 00:18:53.891 "buf_count": 2048 00:18:53.891 } 00:18:53.891 } 00:18:53.891 ] 00:18:53.891 }, 00:18:53.891 { 00:18:53.891 "subsystem": "bdev", 00:18:53.891 "config": [ 00:18:53.891 { 00:18:53.891 "method": "bdev_set_options", 00:18:53.891 "params": { 00:18:53.891 "bdev_io_pool_size": 65535, 00:18:53.891 "bdev_io_cache_size": 256, 00:18:53.891 "bdev_auto_examine": true, 00:18:53.891 "iobuf_small_cache_size": 128, 00:18:53.891 "iobuf_large_cache_size": 16 00:18:53.891 } 00:18:53.891 }, 00:18:53.891 { 00:18:53.891 "method": "bdev_raid_set_options", 00:18:53.891 "params": { 00:18:53.891 "process_window_size_kb": 1024 00:18:53.891 } 00:18:53.891 }, 00:18:53.891 { 00:18:53.891 "method": "bdev_iscsi_set_options", 00:18:53.891 "params": { 00:18:53.891 "timeout_sec": 30 00:18:53.891 } 00:18:53.891 }, 00:18:53.891 { 00:18:53.891 "method": "bdev_nvme_set_options", 00:18:53.891 "params": { 00:18:53.891 "action_on_timeout": "none", 00:18:53.891 "timeout_us": 0, 00:18:53.891 "timeout_admin_us": 0, 00:18:53.891 "keep_alive_timeout_ms": 10000, 00:18:53.891 "arbitration_burst": 0, 00:18:53.891 "low_priority_weight": 0, 00:18:53.892 "medium_priority_weight": 0, 00:18:53.892 "high_priority_weight": 0, 00:18:53.892 "nvme_adminq_poll_period_us": 10000, 00:18:53.892 "nvme_ioq_poll_period_us": 0, 00:18:53.892 "io_queue_requests": 512, 00:18:53.892 "delay_cmd_submit": true, 00:18:53.892 "transport_retry_count": 4, 00:18:53.892 
"bdev_retry_count": 3, 00:18:53.892 "transport_ack_timeout": 0, 00:18:53.892 "ctrlr_loss_timeout_sec": 0, 00:18:53.892 "reconnect_delay_sec": 0, 00:18:53.892 "fast_io_fail_timeout_sec": 0, 00:18:53.892 "disable_auto_failback": false, 00:18:53.892 "generate_uuids": false, 00:18:53.892 "transport_tos": 0, 00:18:53.892 "nvme_error_stat": false, 00:18:53.892 "rdma_srq_size": 0, 00:18:53.892 "io_path_stat": false, 00:18:53.892 "allow_accel_sequence": false, 00:18:53.892 "rdma_max_cq_size": 0, 00:18:53.892 "rdma_cm_event_timeout_ms": 0, 00:18:53.892 "dhchap_digests": [ 00:18:53.892 "sha256", 00:18:53.892 "sha384", 00:18:53.892 "sha512" 00:18:53.892 ], 00:18:53.892 "dhchap_dhgroups": [ 00:18:53.892 "null", 00:18:53.892 "ffdhe2048", 00:18:53.892 "ffdhe3072", 00:18:53.892 "ffdhe4096", 00:18:53.892 "ffdhe6144", 00:18:53.892 "ffdhe8192" 00:18:53.892 ] 00:18:53.892 } 00:18:53.892 }, 00:18:53.892 { 00:18:53.892 "method": "bdev_nvme_attach_controller", 00:18:53.892 "params": { 00:18:53.892 "name": "TLSTEST", 00:18:53.892 "trtype": "TCP", 00:18:53.892 "adrfam": "IPv4", 00:18:53.892 "traddr": "10.0.0.2", 00:18:53.892 "trsvcid": "4420", 00:18:53.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.892 "prchk_reftag": false, 00:18:53.892 "prchk_guard": false, 00:18:53.892 "ctrlr_loss_timeout_sec": 0, 00:18:53.892 "reconnect_delay_sec": 0, 00:18:53.892 "fast_io_fail_timeout_sec": 0, 00:18:53.892 "psk": "/tmp/tmp.1Bh9UJOzOm", 00:18:53.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.892 "hdgst": false, 00:18:53.892 "ddgst": false 00:18:53.892 } 00:18:53.892 }, 00:18:53.892 { 00:18:53.892 "method": "bdev_nvme_set_hotplug", 00:18:53.892 "params": { 00:18:53.892 "period_us": 100000, 00:18:53.892 "enable": false 00:18:53.892 } 00:18:53.892 }, 00:18:53.892 { 00:18:53.892 "method": "bdev_wait_for_examine" 00:18:53.892 } 00:18:53.892 ] 00:18:53.892 }, 00:18:53.892 { 00:18:53.892 "subsystem": "nbd", 00:18:53.892 "config": [] 00:18:53.892 } 00:18:53.892 ] 00:18:53.892 }' 00:18:53.892 
17:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.892 17:01:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.892 [2024-07-15 17:01:00.444355] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:18:53.892 [2024-07-15 17:01:00.444405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113669 ] 00:18:53.892 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.892 [2024-07-15 17:01:00.495840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.151 [2024-07-15 17:01:00.570035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.151 [2024-07-15 17:01:00.712787] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.151 [2024-07-15 17:01:00.712873] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:54.717 17:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.717 17:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:54.717 17:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:54.717 Running I/O for 10 seconds... 
00:19:06.921
00:19:06.921                                                 Latency(us)
00:19:06.921 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:06.921 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:06.921 Verification LBA range: start 0x0 length 0x2000
00:19:06.921     TLSTESTn1           :      10.01    5408.83      21.13       0.00     0.00   23628.06    6667.58   36244.26
00:19:06.921 ===================================================================================================================
00:19:06.921 Total                       :    5408.83      21.13       0.00     0.00   23628.06    6667.58   36244.26
00:19:06.921 0
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 113669
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 113669 ']'
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 113669
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113669
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113669'
00:19:06.921 killing process with pid 113669
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 113669
00:19:06.921 Received shutdown signal, test time was about 10.000000 seconds
00:19:06.921
00:19:06.921                                                 Latency(us)
00:19:06.921 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:06.921 ===================================================================================================================
00:19:06.921 Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:19:06.921 [2024-07-15 17:01:11.420489] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 113669
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 113623
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 113623 ']'
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 113623
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113623
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113623'
00:19:06.921 killing process with pid 113623
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 113623
00:19:06.921 [2024-07-15 17:01:11.648076] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 113623
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=115616 00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 115616 00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 115616 ']' 00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:06.921 17:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.921 [2024-07-15 17:01:11.899799] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:19:06.921 [2024-07-15 17:01:11.899848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.921 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.921 [2024-07-15 17:01:11.956485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.921 [2024-07-15 17:01:12.027610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:06.921 [2024-07-15 17:01:12.027653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.921 [2024-07-15 17:01:12.027659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.921 [2024-07-15 17:01:12.027665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.921 [2024-07-15 17:01:12.027670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.921 [2024-07-15 17:01:12.027704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.921 17:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:06.921 17:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:06.921 17:01:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:06.921 17:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:06.921 17:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.921 17:01:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.921 17:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.1Bh9UJOzOm 00:19:06.921 17:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1Bh9UJOzOm 00:19:06.921 17:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:06.921 [2024-07-15 17:01:12.887414] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.921 17:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:06.921 17:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:06.921 [2024-07-15 17:01:13.216246] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.921 [2024-07-15 17:01:13.216462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.921 17:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:06.921 malloc0 00:19:06.921 17:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:06.921 17:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Bh9UJOzOm 00:19:07.180 [2024-07-15 17:01:13.693751] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:07.180 17:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:07.180 17:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=115968 00:19:07.180 17:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:07.180 17:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 115968 /var/tmp/bdevperf.sock 00:19:07.180 17:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 115968 ']' 00:19:07.180 17:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.180 17:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:19:07.180 17:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.180 17:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.180 17:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.180 [2024-07-15 17:01:13.734482] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:19:07.180 [2024-07-15 17:01:13.734538] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115968 ] 00:19:07.180 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.180 [2024-07-15 17:01:13.789092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.439 [2024-07-15 17:01:13.861960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.439 17:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:07.439 17:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:07.439 17:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1Bh9UJOzOm 00:19:07.697 17:01:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:07.697 [2024-07-15 17:01:14.299657] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.956 
nvme0n1
00:19:07.956 17:01:14 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:07.956 Running I/O for 1 seconds...
00:19:08.894
00:19:08.894                                                 Latency(us)
00:19:08.894 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:08.894 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:08.894 Verification LBA range: start 0x0 length 0x2000
00:19:08.894     nvme0n1             :       1.03    5093.70      19.90       0.00     0.00   24869.49    6838.54   68841.29
00:19:08.894 ===================================================================================================================
00:19:08.894 Total                       :    5093.70      19.90       0.00     0.00   24869.49    6838.54   68841.29
00:19:08.894 0
00:19:08.894 17:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 115968
00:19:08.894 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 115968 ']'
00:19:08.894 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 115968
00:19:08.894 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:19:08.894 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:08.894 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115968
00:19:08.894 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:09.154 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:09.154 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115968'
00:19:09.154 killing process with pid 115968
00:19:09.154 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 115968
00:19:09.154 Received shutdown signal, test time was about 1.000000 seconds
00:19:09.155
00:19:09.155                                                 Latency(us)
00:19:09.155 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:09.155 ===================================================================================================================
00:19:09.155 Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 115968
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 115616
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 115616 ']'
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 115616
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115616
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115616'
00:19:09.155 killing process with pid 115616
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 115616
00:19:09.155 [2024-07-15 17:01:15.791273] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:19:09.155 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 115616
00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart
00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls --
nvmf/common.sh@481 -- # nvmfpid=116311 00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 116311 00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116311 ']' 00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.414 17:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.414 [2024-07-15 17:01:16.042799] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:19:09.414 [2024-07-15 17:01:16.042848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.414 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.673 [2024-07-15 17:01:16.099909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.673 [2024-07-15 17:01:16.171928] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.673 [2024-07-15 17:01:16.171972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:09.673 [2024-07-15 17:01:16.171978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.673 [2024-07-15 17:01:16.171984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.673 [2024-07-15 17:01:16.171988] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.673 [2024-07-15 17:01:16.172021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.242 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.242 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:10.242 17:01:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:10.242 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:10.242 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.242 17:01:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.242 17:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:19:10.242 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.242 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.242 [2024-07-15 17:01:16.875542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.242 malloc0 00:19:10.242 [2024-07-15 17:01:16.903721] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.242 [2024-07-15 17:01:16.903915] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.501 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.501 17:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=116465 00:19:10.501 17:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
116465 /var/tmp/bdevperf.sock 00:19:10.501 17:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:10.501 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116465 ']' 00:19:10.501 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.501 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.501 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.501 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.501 17:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.501 [2024-07-15 17:01:16.975745] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:19:10.501 [2024-07-15 17:01:16.975787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116465 ] 00:19:10.501 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.501 [2024-07-15 17:01:17.029566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.501 [2024-07-15 17:01:17.108900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.436 17:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.436 17:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:11.436 17:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1Bh9UJOzOm 00:19:11.436 17:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:11.694 [2024-07-15 17:01:18.128523] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.694 nvme0n1 00:19:11.694 17:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:11.694 Running I/O for 1 seconds... 
00:19:13.097 00:19:13.097 Latency(us) 00:19:13.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.097 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:13.097 Verification LBA range: start 0x0 length 0x2000 00:19:13.097 nvme0n1 : 1.02 5399.17 21.09 0.00 0.00 23470.40 6724.56 36472.21 00:19:13.097 =================================================================================================================== 00:19:13.097 Total : 5399.17 21.09 0.00 0.00 23470.40 6724.56 36472.21 00:19:13.097 0 00:19:13.097 17:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:13.097 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.097 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.097 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.097 17:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:19:13.097 "subsystems": [ 00:19:13.097 { 00:19:13.097 "subsystem": "keyring", 00:19:13.097 "config": [ 00:19:13.097 { 00:19:13.097 "method": "keyring_file_add_key", 00:19:13.097 "params": { 00:19:13.097 "name": "key0", 00:19:13.097 "path": "/tmp/tmp.1Bh9UJOzOm" 00:19:13.097 } 00:19:13.097 } 00:19:13.097 ] 00:19:13.097 }, 00:19:13.097 { 00:19:13.097 "subsystem": "iobuf", 00:19:13.097 "config": [ 00:19:13.097 { 00:19:13.097 "method": "iobuf_set_options", 00:19:13.097 "params": { 00:19:13.097 "small_pool_count": 8192, 00:19:13.097 "large_pool_count": 1024, 00:19:13.097 "small_bufsize": 8192, 00:19:13.097 "large_bufsize": 135168 00:19:13.097 } 00:19:13.097 } 00:19:13.097 ] 00:19:13.097 }, 00:19:13.097 { 00:19:13.097 "subsystem": "sock", 00:19:13.097 "config": [ 00:19:13.097 { 00:19:13.097 "method": "sock_set_default_impl", 00:19:13.097 "params": { 00:19:13.097 "impl_name": "posix" 00:19:13.097 } 00:19:13.097 }, 00:19:13.097 { 00:19:13.097 "method": "sock_impl_set_options", 00:19:13.097 
"params": { 00:19:13.097 "impl_name": "ssl", 00:19:13.097 "recv_buf_size": 4096, 00:19:13.097 "send_buf_size": 4096, 00:19:13.097 "enable_recv_pipe": true, 00:19:13.097 "enable_quickack": false, 00:19:13.097 "enable_placement_id": 0, 00:19:13.097 "enable_zerocopy_send_server": true, 00:19:13.097 "enable_zerocopy_send_client": false, 00:19:13.097 "zerocopy_threshold": 0, 00:19:13.097 "tls_version": 0, 00:19:13.097 "enable_ktls": false 00:19:13.097 } 00:19:13.097 }, 00:19:13.097 { 00:19:13.097 "method": "sock_impl_set_options", 00:19:13.097 "params": { 00:19:13.097 "impl_name": "posix", 00:19:13.097 "recv_buf_size": 2097152, 00:19:13.097 "send_buf_size": 2097152, 00:19:13.097 "enable_recv_pipe": true, 00:19:13.097 "enable_quickack": false, 00:19:13.097 "enable_placement_id": 0, 00:19:13.097 "enable_zerocopy_send_server": true, 00:19:13.097 "enable_zerocopy_send_client": false, 00:19:13.097 "zerocopy_threshold": 0, 00:19:13.097 "tls_version": 0, 00:19:13.097 "enable_ktls": false 00:19:13.097 } 00:19:13.097 } 00:19:13.097 ] 00:19:13.097 }, 00:19:13.097 { 00:19:13.097 "subsystem": "vmd", 00:19:13.097 "config": [] 00:19:13.097 }, 00:19:13.097 { 00:19:13.098 "subsystem": "accel", 00:19:13.098 "config": [ 00:19:13.098 { 00:19:13.098 "method": "accel_set_options", 00:19:13.098 "params": { 00:19:13.098 "small_cache_size": 128, 00:19:13.098 "large_cache_size": 16, 00:19:13.098 "task_count": 2048, 00:19:13.098 "sequence_count": 2048, 00:19:13.098 "buf_count": 2048 00:19:13.098 } 00:19:13.098 } 00:19:13.098 ] 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "subsystem": "bdev", 00:19:13.098 "config": [ 00:19:13.098 { 00:19:13.098 "method": "bdev_set_options", 00:19:13.098 "params": { 00:19:13.098 "bdev_io_pool_size": 65535, 00:19:13.098 "bdev_io_cache_size": 256, 00:19:13.098 "bdev_auto_examine": true, 00:19:13.098 "iobuf_small_cache_size": 128, 00:19:13.098 "iobuf_large_cache_size": 16 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "bdev_raid_set_options", 
00:19:13.098 "params": { 00:19:13.098 "process_window_size_kb": 1024 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "bdev_iscsi_set_options", 00:19:13.098 "params": { 00:19:13.098 "timeout_sec": 30 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "bdev_nvme_set_options", 00:19:13.098 "params": { 00:19:13.098 "action_on_timeout": "none", 00:19:13.098 "timeout_us": 0, 00:19:13.098 "timeout_admin_us": 0, 00:19:13.098 "keep_alive_timeout_ms": 10000, 00:19:13.098 "arbitration_burst": 0, 00:19:13.098 "low_priority_weight": 0, 00:19:13.098 "medium_priority_weight": 0, 00:19:13.098 "high_priority_weight": 0, 00:19:13.098 "nvme_adminq_poll_period_us": 10000, 00:19:13.098 "nvme_ioq_poll_period_us": 0, 00:19:13.098 "io_queue_requests": 0, 00:19:13.098 "delay_cmd_submit": true, 00:19:13.098 "transport_retry_count": 4, 00:19:13.098 "bdev_retry_count": 3, 00:19:13.098 "transport_ack_timeout": 0, 00:19:13.098 "ctrlr_loss_timeout_sec": 0, 00:19:13.098 "reconnect_delay_sec": 0, 00:19:13.098 "fast_io_fail_timeout_sec": 0, 00:19:13.098 "disable_auto_failback": false, 00:19:13.098 "generate_uuids": false, 00:19:13.098 "transport_tos": 0, 00:19:13.098 "nvme_error_stat": false, 00:19:13.098 "rdma_srq_size": 0, 00:19:13.098 "io_path_stat": false, 00:19:13.098 "allow_accel_sequence": false, 00:19:13.098 "rdma_max_cq_size": 0, 00:19:13.098 "rdma_cm_event_timeout_ms": 0, 00:19:13.098 "dhchap_digests": [ 00:19:13.098 "sha256", 00:19:13.098 "sha384", 00:19:13.098 "sha512" 00:19:13.098 ], 00:19:13.098 "dhchap_dhgroups": [ 00:19:13.098 "null", 00:19:13.098 "ffdhe2048", 00:19:13.098 "ffdhe3072", 00:19:13.098 "ffdhe4096", 00:19:13.098 "ffdhe6144", 00:19:13.098 "ffdhe8192" 00:19:13.098 ] 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "bdev_nvme_set_hotplug", 00:19:13.098 "params": { 00:19:13.098 "period_us": 100000, 00:19:13.098 "enable": false 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "bdev_malloc_create", 
00:19:13.098 "params": { 00:19:13.098 "name": "malloc0", 00:19:13.098 "num_blocks": 8192, 00:19:13.098 "block_size": 4096, 00:19:13.098 "physical_block_size": 4096, 00:19:13.098 "uuid": "c376b2ae-bab4-4f8a-a126-3edf353f365b", 00:19:13.098 "optimal_io_boundary": 0 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "bdev_wait_for_examine" 00:19:13.098 } 00:19:13.098 ] 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "subsystem": "nbd", 00:19:13.098 "config": [] 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "subsystem": "scheduler", 00:19:13.098 "config": [ 00:19:13.098 { 00:19:13.098 "method": "framework_set_scheduler", 00:19:13.098 "params": { 00:19:13.098 "name": "static" 00:19:13.098 } 00:19:13.098 } 00:19:13.098 ] 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "subsystem": "nvmf", 00:19:13.098 "config": [ 00:19:13.098 { 00:19:13.098 "method": "nvmf_set_config", 00:19:13.098 "params": { 00:19:13.098 "discovery_filter": "match_any", 00:19:13.098 "admin_cmd_passthru": { 00:19:13.098 "identify_ctrlr": false 00:19:13.098 } 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "nvmf_set_max_subsystems", 00:19:13.098 "params": { 00:19:13.098 "max_subsystems": 1024 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "nvmf_set_crdt", 00:19:13.098 "params": { 00:19:13.098 "crdt1": 0, 00:19:13.098 "crdt2": 0, 00:19:13.098 "crdt3": 0 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "nvmf_create_transport", 00:19:13.098 "params": { 00:19:13.098 "trtype": "TCP", 00:19:13.098 "max_queue_depth": 128, 00:19:13.098 "max_io_qpairs_per_ctrlr": 127, 00:19:13.098 "in_capsule_data_size": 4096, 00:19:13.098 "max_io_size": 131072, 00:19:13.098 "io_unit_size": 131072, 00:19:13.098 "max_aq_depth": 128, 00:19:13.098 "num_shared_buffers": 511, 00:19:13.098 "buf_cache_size": 4294967295, 00:19:13.098 "dif_insert_or_strip": false, 00:19:13.098 "zcopy": false, 00:19:13.098 "c2h_success": false, 00:19:13.098 "sock_priority": 0, 
00:19:13.098 "abort_timeout_sec": 1, 00:19:13.098 "ack_timeout": 0, 00:19:13.098 "data_wr_pool_size": 0 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "nvmf_create_subsystem", 00:19:13.098 "params": { 00:19:13.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.098 "allow_any_host": false, 00:19:13.098 "serial_number": "00000000000000000000", 00:19:13.098 "model_number": "SPDK bdev Controller", 00:19:13.098 "max_namespaces": 32, 00:19:13.098 "min_cntlid": 1, 00:19:13.098 "max_cntlid": 65519, 00:19:13.098 "ana_reporting": false 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "nvmf_subsystem_add_host", 00:19:13.098 "params": { 00:19:13.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.098 "host": "nqn.2016-06.io.spdk:host1", 00:19:13.098 "psk": "key0" 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "nvmf_subsystem_add_ns", 00:19:13.098 "params": { 00:19:13.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.098 "namespace": { 00:19:13.098 "nsid": 1, 00:19:13.098 "bdev_name": "malloc0", 00:19:13.098 "nguid": "C376B2AEBAB44F8AA1263EDF353F365B", 00:19:13.098 "uuid": "c376b2ae-bab4-4f8a-a126-3edf353f365b", 00:19:13.098 "no_auto_visible": false 00:19:13.098 } 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "nvmf_subsystem_add_listener", 00:19:13.098 "params": { 00:19:13.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.098 "listen_address": { 00:19:13.098 "trtype": "TCP", 00:19:13.098 "adrfam": "IPv4", 00:19:13.098 "traddr": "10.0.0.2", 00:19:13.098 "trsvcid": "4420" 00:19:13.098 }, 00:19:13.098 "secure_channel": true 00:19:13.098 } 00:19:13.098 } 00:19:13.098 ] 00:19:13.098 } 00:19:13.098 ] 00:19:13.098 }' 00:19:13.098 17:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:13.098 17:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:19:13.098 "subsystems": [ 00:19:13.098 { 
00:19:13.098 "subsystem": "keyring", 00:19:13.098 "config": [ 00:19:13.098 { 00:19:13.098 "method": "keyring_file_add_key", 00:19:13.098 "params": { 00:19:13.098 "name": "key0", 00:19:13.098 "path": "/tmp/tmp.1Bh9UJOzOm" 00:19:13.098 } 00:19:13.098 } 00:19:13.098 ] 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "subsystem": "iobuf", 00:19:13.098 "config": [ 00:19:13.098 { 00:19:13.098 "method": "iobuf_set_options", 00:19:13.098 "params": { 00:19:13.098 "small_pool_count": 8192, 00:19:13.098 "large_pool_count": 1024, 00:19:13.098 "small_bufsize": 8192, 00:19:13.098 "large_bufsize": 135168 00:19:13.098 } 00:19:13.098 } 00:19:13.098 ] 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "subsystem": "sock", 00:19:13.098 "config": [ 00:19:13.098 { 00:19:13.098 "method": "sock_set_default_impl", 00:19:13.098 "params": { 00:19:13.098 "impl_name": "posix" 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "sock_impl_set_options", 00:19:13.098 "params": { 00:19:13.098 "impl_name": "ssl", 00:19:13.098 "recv_buf_size": 4096, 00:19:13.098 "send_buf_size": 4096, 00:19:13.098 "enable_recv_pipe": true, 00:19:13.098 "enable_quickack": false, 00:19:13.098 "enable_placement_id": 0, 00:19:13.098 "enable_zerocopy_send_server": true, 00:19:13.098 "enable_zerocopy_send_client": false, 00:19:13.098 "zerocopy_threshold": 0, 00:19:13.098 "tls_version": 0, 00:19:13.098 "enable_ktls": false 00:19:13.098 } 00:19:13.098 }, 00:19:13.098 { 00:19:13.098 "method": "sock_impl_set_options", 00:19:13.098 "params": { 00:19:13.098 "impl_name": "posix", 00:19:13.098 "recv_buf_size": 2097152, 00:19:13.098 "send_buf_size": 2097152, 00:19:13.098 "enable_recv_pipe": true, 00:19:13.098 "enable_quickack": false, 00:19:13.098 "enable_placement_id": 0, 00:19:13.098 "enable_zerocopy_send_server": true, 00:19:13.098 "enable_zerocopy_send_client": false, 00:19:13.098 "zerocopy_threshold": 0, 00:19:13.098 "tls_version": 0, 00:19:13.099 "enable_ktls": false 00:19:13.099 } 00:19:13.099 } 00:19:13.099 ] 
00:19:13.099 }, 00:19:13.099 { 00:19:13.099 "subsystem": "vmd", 00:19:13.099 "config": [] 00:19:13.099 }, 00:19:13.099 { 00:19:13.099 "subsystem": "accel", 00:19:13.099 "config": [ 00:19:13.099 { 00:19:13.099 "method": "accel_set_options", 00:19:13.099 "params": { 00:19:13.099 "small_cache_size": 128, 00:19:13.099 "large_cache_size": 16, 00:19:13.099 "task_count": 2048, 00:19:13.099 "sequence_count": 2048, 00:19:13.099 "buf_count": 2048 00:19:13.099 } 00:19:13.099 } 00:19:13.099 ] 00:19:13.099 }, 00:19:13.099 { 00:19:13.099 "subsystem": "bdev", 00:19:13.099 "config": [ 00:19:13.099 { 00:19:13.099 "method": "bdev_set_options", 00:19:13.099 "params": { 00:19:13.099 "bdev_io_pool_size": 65535, 00:19:13.099 "bdev_io_cache_size": 256, 00:19:13.099 "bdev_auto_examine": true, 00:19:13.099 "iobuf_small_cache_size": 128, 00:19:13.099 "iobuf_large_cache_size": 16 00:19:13.099 } 00:19:13.099 }, 00:19:13.099 { 00:19:13.099 "method": "bdev_raid_set_options", 00:19:13.099 "params": { 00:19:13.099 "process_window_size_kb": 1024 00:19:13.099 } 00:19:13.099 }, 00:19:13.099 { 00:19:13.099 "method": "bdev_iscsi_set_options", 00:19:13.099 "params": { 00:19:13.099 "timeout_sec": 30 00:19:13.099 } 00:19:13.099 }, 00:19:13.099 { 00:19:13.099 "method": "bdev_nvme_set_options", 00:19:13.099 "params": { 00:19:13.099 "action_on_timeout": "none", 00:19:13.099 "timeout_us": 0, 00:19:13.099 "timeout_admin_us": 0, 00:19:13.099 "keep_alive_timeout_ms": 10000, 00:19:13.099 "arbitration_burst": 0, 00:19:13.099 "low_priority_weight": 0, 00:19:13.099 "medium_priority_weight": 0, 00:19:13.099 "high_priority_weight": 0, 00:19:13.099 "nvme_adminq_poll_period_us": 10000, 00:19:13.099 "nvme_ioq_poll_period_us": 0, 00:19:13.099 "io_queue_requests": 512, 00:19:13.099 "delay_cmd_submit": true, 00:19:13.099 "transport_retry_count": 4, 00:19:13.099 "bdev_retry_count": 3, 00:19:13.099 "transport_ack_timeout": 0, 00:19:13.099 "ctrlr_loss_timeout_sec": 0, 00:19:13.099 "reconnect_delay_sec": 0, 00:19:13.099 
"fast_io_fail_timeout_sec": 0, 00:19:13.099 "disable_auto_failback": false, 00:19:13.099 "generate_uuids": false, 00:19:13.099 "transport_tos": 0, 00:19:13.099 "nvme_error_stat": false, 00:19:13.099 "rdma_srq_size": 0, 00:19:13.099 "io_path_stat": false, 00:19:13.099 "allow_accel_sequence": false, 00:19:13.099 "rdma_max_cq_size": 0, 00:19:13.099 "rdma_cm_event_timeout_ms": 0, 00:19:13.099 "dhchap_digests": [ 00:19:13.099 "sha256", 00:19:13.099 "sha384", 00:19:13.099 "sha512" 00:19:13.099 ], 00:19:13.099 "dhchap_dhgroups": [ 00:19:13.099 "null", 00:19:13.099 "ffdhe2048", 00:19:13.099 "ffdhe3072", 00:19:13.099 "ffdhe4096", 00:19:13.099 "ffdhe6144", 00:19:13.099 "ffdhe8192" 00:19:13.099 ] 00:19:13.099 } 00:19:13.099 }, 00:19:13.099 { 00:19:13.099 "method": "bdev_nvme_attach_controller", 00:19:13.099 "params": { 00:19:13.099 "name": "nvme0", 00:19:13.099 "trtype": "TCP", 00:19:13.099 "adrfam": "IPv4", 00:19:13.099 "traddr": "10.0.0.2", 00:19:13.099 "trsvcid": "4420", 00:19:13.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.099 "prchk_reftag": false, 00:19:13.099 "prchk_guard": false, 00:19:13.099 "ctrlr_loss_timeout_sec": 0, 00:19:13.099 "reconnect_delay_sec": 0, 00:19:13.099 "fast_io_fail_timeout_sec": 0, 00:19:13.099 "psk": "key0", 00:19:13.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.099 "hdgst": false, 00:19:13.099 "ddgst": false 00:19:13.099 } 00:19:13.099 }, 00:19:13.099 { 00:19:13.099 "method": "bdev_nvme_set_hotplug", 00:19:13.099 "params": { 00:19:13.099 "period_us": 100000, 00:19:13.099 "enable": false 00:19:13.099 } 00:19:13.099 }, 00:19:13.099 { 00:19:13.099 "method": "bdev_enable_histogram", 00:19:13.099 "params": { 00:19:13.099 "name": "nvme0n1", 00:19:13.099 "enable": true 00:19:13.099 } 00:19:13.099 }, 00:19:13.099 { 00:19:13.099 "method": "bdev_wait_for_examine" 00:19:13.099 } 00:19:13.099 ] 00:19:13.099 }, 00:19:13.099 { 00:19:13.099 "subsystem": "nbd", 00:19:13.099 "config": [] 00:19:13.099 } 00:19:13.099 ] 00:19:13.099 }' 00:19:13.099 
17:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 116465 00:19:13.099 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116465 ']' 00:19:13.099 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116465 00:19:13.099 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:13.099 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:13.099 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116465 00:19:13.099 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:13.099 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:13.099 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116465' 00:19:13.099 killing process with pid 116465 00:19:13.099 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116465 00:19:13.099 Received shutdown signal, test time was about 1.000000 seconds 00:19:13.099 00:19:13.099 Latency(us) 00:19:13.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.099 =================================================================================================================== 00:19:13.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.099 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116465 00:19:13.409 17:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 116311 00:19:13.409 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116311 ']' 00:19:13.409 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116311 00:19:13.409 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:13.409 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:13.409 17:01:19 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116311 00:19:13.409 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:13.409 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:13.409 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116311' 00:19:13.409 killing process with pid 116311 00:19:13.409 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116311 00:19:13.409 17:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116311 00:19:13.669 17:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:13.670 17:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:13.670 17:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:13.670 17:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:19:13.670 "subsystems": [ 00:19:13.670 { 00:19:13.670 "subsystem": "keyring", 00:19:13.670 "config": [ 00:19:13.670 { 00:19:13.670 "method": "keyring_file_add_key", 00:19:13.670 "params": { 00:19:13.670 "name": "key0", 00:19:13.670 "path": "/tmp/tmp.1Bh9UJOzOm" 00:19:13.670 } 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "iobuf", 00:19:13.670 "config": [ 00:19:13.670 { 00:19:13.670 "method": "iobuf_set_options", 00:19:13.670 "params": { 00:19:13.670 "small_pool_count": 8192, 00:19:13.670 "large_pool_count": 1024, 00:19:13.670 "small_bufsize": 8192, 00:19:13.670 "large_bufsize": 135168 00:19:13.670 } 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "sock", 00:19:13.670 "config": [ 00:19:13.670 { 00:19:13.670 "method": "sock_set_default_impl", 00:19:13.670 "params": { 00:19:13.670 "impl_name": "posix" 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "sock_impl_set_options", 00:19:13.670 "params": { 00:19:13.670 
"impl_name": "ssl", 00:19:13.670 "recv_buf_size": 4096, 00:19:13.670 "send_buf_size": 4096, 00:19:13.670 "enable_recv_pipe": true, 00:19:13.670 "enable_quickack": false, 00:19:13.670 "enable_placement_id": 0, 00:19:13.670 "enable_zerocopy_send_server": true, 00:19:13.670 "enable_zerocopy_send_client": false, 00:19:13.670 "zerocopy_threshold": 0, 00:19:13.670 "tls_version": 0, 00:19:13.670 "enable_ktls": false 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "sock_impl_set_options", 00:19:13.670 "params": { 00:19:13.670 "impl_name": "posix", 00:19:13.670 "recv_buf_size": 2097152, 00:19:13.670 "send_buf_size": 2097152, 00:19:13.670 "enable_recv_pipe": true, 00:19:13.670 "enable_quickack": false, 00:19:13.670 "enable_placement_id": 0, 00:19:13.670 "enable_zerocopy_send_server": true, 00:19:13.670 "enable_zerocopy_send_client": false, 00:19:13.670 "zerocopy_threshold": 0, 00:19:13.670 "tls_version": 0, 00:19:13.670 "enable_ktls": false 00:19:13.670 } 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "vmd", 00:19:13.670 "config": [] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "accel", 00:19:13.670 "config": [ 00:19:13.670 { 00:19:13.670 "method": "accel_set_options", 00:19:13.670 "params": { 00:19:13.670 "small_cache_size": 128, 00:19:13.670 "large_cache_size": 16, 00:19:13.670 "task_count": 2048, 00:19:13.670 "sequence_count": 2048, 00:19:13.670 "buf_count": 2048 00:19:13.670 } 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "bdev", 00:19:13.670 "config": [ 00:19:13.670 { 00:19:13.670 "method": "bdev_set_options", 00:19:13.670 "params": { 00:19:13.670 "bdev_io_pool_size": 65535, 00:19:13.670 "bdev_io_cache_size": 256, 00:19:13.670 "bdev_auto_examine": true, 00:19:13.670 "iobuf_small_cache_size": 128, 00:19:13.670 "iobuf_large_cache_size": 16 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_raid_set_options", 00:19:13.670 "params": { 
00:19:13.670 "process_window_size_kb": 1024 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_iscsi_set_options", 00:19:13.670 "params": { 00:19:13.670 "timeout_sec": 30 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_nvme_set_options", 00:19:13.670 "params": { 00:19:13.670 "action_on_timeout": "none", 00:19:13.670 "timeout_us": 0, 00:19:13.670 "timeout_admin_us": 0, 00:19:13.670 "keep_alive_timeout_ms": 10000, 00:19:13.670 "arbitration_burst": 0, 00:19:13.670 "low_priority_weight": 0, 00:19:13.670 "medium_priority_weight": 0, 00:19:13.670 "high_priority_weight": 0, 00:19:13.670 "nvme_adminq_poll_period_us": 10000, 00:19:13.670 "nvme_ioq_poll_period_us": 0, 00:19:13.670 "io_queue_requests": 0, 00:19:13.670 "delay_cmd_submit": true, 00:19:13.670 "transport_retry_count": 4, 00:19:13.670 "bdev_retry_count": 3, 00:19:13.670 "transport_ack_timeout": 0, 00:19:13.670 "ctrlr_loss_timeout_sec": 0, 00:19:13.670 "reconnect_delay_sec": 0, 00:19:13.670 "fast_io_fail_timeout_sec": 0, 00:19:13.670 "disable_auto_failback": false, 00:19:13.670 "generate_uuids": false, 00:19:13.670 "transport_tos": 0, 00:19:13.670 "nvme_error_stat": false, 00:19:13.670 "rdma_srq_size": 0, 00:19:13.670 "io_path_stat": false, 00:19:13.670 "allow_accel_sequence": false, 00:19:13.670 "rdma_max_cq_size": 0, 00:19:13.670 "rdma_cm_event_timeout_ms": 0, 00:19:13.670 "dhchap_digests": [ 00:19:13.670 "sha256", 00:19:13.670 "sha384", 00:19:13.670 "sha512" 00:19:13.670 ], 00:19:13.670 "dhchap_dhgroups": [ 00:19:13.670 "null", 00:19:13.670 "ffdhe2048", 00:19:13.670 "ffdhe3072", 00:19:13.670 "ffdhe4096", 00:19:13.670 "ffdhe6144", 00:19:13.670 "ffdhe8192" 00:19:13.670 ] 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_nvme_set_hotplug", 00:19:13.670 "params": { 00:19:13.670 "period_us": 100000, 00:19:13.670 "enable": false 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_malloc_create", 00:19:13.670 "params": { 
00:19:13.670 "name": "malloc0", 00:19:13.670 "num_blocks": 8192, 00:19:13.670 "block_size": 4096, 00:19:13.670 "physical_block_size": 4096, 00:19:13.670 "uuid": "c376b2ae-bab4-4f8a-a126-3edf353f365b", 00:19:13.670 "optimal_io_boundary": 0 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_wait_for_examine" 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "nbd", 00:19:13.670 "config": [] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "scheduler", 00:19:13.670 "config": [ 00:19:13.670 { 00:19:13.670 "method": "framework_set_scheduler", 00:19:13.670 "params": { 00:19:13.670 "name": "static" 00:19:13.670 } 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "nvmf", 00:19:13.670 "config": [ 00:19:13.670 { 00:19:13.670 "method": "nvmf_set_config", 00:19:13.670 "params": { 00:19:13.670 "discovery_filter": "match_any", 00:19:13.670 "admin_cmd_passthru": { 00:19:13.670 "identify_ctrlr": false 00:19:13.670 } 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "nvmf_set_max_subsystems", 00:19:13.670 "params": { 00:19:13.670 "max_subsystems": 1024 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "nvmf_set_crdt", 00:19:13.670 "params": { 00:19:13.670 "crdt1": 0, 00:19:13.670 "crdt2": 0, 00:19:13.670 "crdt3": 0 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "nvmf_create_transport", 00:19:13.670 "params": { 00:19:13.670 "trtype": "TCP", 00:19:13.670 "max_queue_depth": 128, 00:19:13.670 "max_io_qpairs_per_ctrlr": 127, 00:19:13.670 "in_capsule_data_size": 4096, 00:19:13.670 "max_io_size": 131072, 00:19:13.670 "io_unit_size": 131072, 00:19:13.670 "max_aq_depth": 128, 00:19:13.670 "num_shared_buffers": 511, 00:19:13.670 "buf_cache_size": 4294967295, 00:19:13.670 "dif_insert_or_strip": false, 00:19:13.670 "zcopy": false, 00:19:13.670 "c2h_success": false, 00:19:13.670 "sock_priority": 0, 00:19:13.670 "abort_timeout_sec": 
1, 00:19:13.670 "ack_timeout": 0, 00:19:13.670 "data_wr_pool_size": 0 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "nvmf_create_subsystem", 00:19:13.670 "params": { 00:19:13.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.670 "allow_any_host": false, 00:19:13.670 "serial_number": "00000000000000000000", 00:19:13.670 "model_number": "SPDK bdev Controller", 00:19:13.670 "max_namespaces": 32, 00:19:13.670 "min_cntlid": 1, 00:19:13.670 "max_cntlid": 65519, 00:19:13.670 "ana_reporting": false 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "nvmf_subsystem_add_host", 00:19:13.670 "params": { 00:19:13.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.670 "host": "nqn.2016-06.io.spdk:host1", 00:19:13.670 "psk": "key0" 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "nvmf_subsystem_add_ns", 00:19:13.670 "params": { 00:19:13.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.670 "namespace": { 00:19:13.670 "nsid": 1, 00:19:13.670 "bdev_name": "malloc0", 00:19:13.670 "nguid": "C376B2AEBAB44F8AA1263EDF353F365B", 00:19:13.670 "uuid": "c376b2ae-bab4-4f8a-a126-3edf353f365b", 00:19:13.670 "no_auto_visible": false 00:19:13.670 } 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "nvmf_subsystem_add_listener", 00:19:13.670 "params": { 00:19:13.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.670 "listen_address": { 00:19:13.670 "trtype": "TCP", 00:19:13.670 "adrfam": "IPv4", 00:19:13.670 "traddr": "10.0.0.2", 00:19:13.670 "trsvcid": "4420" 00:19:13.670 }, 00:19:13.670 "secure_channel": true 00:19:13.670 } 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }' 00:19:13.670 17:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.671 17:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=117032 00:19:13.671 17:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:13.671 17:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 117032 00:19:13.671 17:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 117032 ']' 00:19:13.671 17:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.671 17:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.671 17:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.671 17:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.671 17:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.671 [2024-07-15 17:01:20.217183] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:19:13.671 [2024-07-15 17:01:20.217240] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.671 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.671 [2024-07-15 17:01:20.276127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.930 [2024-07-15 17:01:20.356005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.930 [2024-07-15 17:01:20.356041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:13.930 [2024-07-15 17:01:20.356048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.930 [2024-07-15 17:01:20.356054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.930 [2024-07-15 17:01:20.356059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:13.930 [2024-07-15 17:01:20.356134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.930 [2024-07-15 17:01:20.566155] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.930 [2024-07-15 17:01:20.598189] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.189 [2024-07-15 17:01:20.615540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=117193 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 117193 /var/tmp/bdevperf.sock 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 117193 ']' 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.449 17:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:19:14.449 "subsystems": [ 00:19:14.449 { 00:19:14.449 "subsystem": "keyring", 00:19:14.449 "config": [ 00:19:14.449 { 00:19:14.449 "method": "keyring_file_add_key", 00:19:14.449 "params": { 00:19:14.449 "name": "key0", 00:19:14.449 "path": "/tmp/tmp.1Bh9UJOzOm" 00:19:14.449 } 00:19:14.449 } 00:19:14.449 ] 00:19:14.449 }, 00:19:14.449 { 00:19:14.449 "subsystem": "iobuf", 00:19:14.449 "config": [ 00:19:14.450 { 00:19:14.450 "method": "iobuf_set_options", 00:19:14.450 "params": { 00:19:14.450 "small_pool_count": 8192, 00:19:14.450 "large_pool_count": 1024, 00:19:14.450 "small_bufsize": 8192, 00:19:14.450 "large_bufsize": 135168 00:19:14.450 } 00:19:14.450 } 00:19:14.450 ] 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "subsystem": "sock", 00:19:14.450 "config": [ 00:19:14.450 { 00:19:14.450 "method": "sock_set_default_impl", 00:19:14.450 "params": { 00:19:14.450 "impl_name": "posix" 00:19:14.450 } 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "method": "sock_impl_set_options", 00:19:14.450 "params": { 00:19:14.450 "impl_name": "ssl", 00:19:14.450 "recv_buf_size": 4096, 00:19:14.450 "send_buf_size": 4096, 00:19:14.450 "enable_recv_pipe": true, 00:19:14.450 "enable_quickack": false, 00:19:14.450 "enable_placement_id": 0, 00:19:14.450 "enable_zerocopy_send_server": true, 00:19:14.450 "enable_zerocopy_send_client": false, 00:19:14.450 "zerocopy_threshold": 0, 00:19:14.450 
"tls_version": 0, 00:19:14.450 "enable_ktls": false 00:19:14.450 } 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "method": "sock_impl_set_options", 00:19:14.450 "params": { 00:19:14.450 "impl_name": "posix", 00:19:14.450 "recv_buf_size": 2097152, 00:19:14.450 "send_buf_size": 2097152, 00:19:14.450 "enable_recv_pipe": true, 00:19:14.450 "enable_quickack": false, 00:19:14.450 "enable_placement_id": 0, 00:19:14.450 "enable_zerocopy_send_server": true, 00:19:14.450 "enable_zerocopy_send_client": false, 00:19:14.450 "zerocopy_threshold": 0, 00:19:14.450 "tls_version": 0, 00:19:14.450 "enable_ktls": false 00:19:14.450 } 00:19:14.450 } 00:19:14.450 ] 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "subsystem": "vmd", 00:19:14.450 "config": [] 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "subsystem": "accel", 00:19:14.450 "config": [ 00:19:14.450 { 00:19:14.450 "method": "accel_set_options", 00:19:14.450 "params": { 00:19:14.450 "small_cache_size": 128, 00:19:14.450 "large_cache_size": 16, 00:19:14.450 "task_count": 2048, 00:19:14.450 "sequence_count": 2048, 00:19:14.450 "buf_count": 2048 00:19:14.450 } 00:19:14.450 } 00:19:14.450 ] 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "subsystem": "bdev", 00:19:14.450 "config": [ 00:19:14.450 { 00:19:14.450 "method": "bdev_set_options", 00:19:14.450 "params": { 00:19:14.450 "bdev_io_pool_size": 65535, 00:19:14.450 "bdev_io_cache_size": 256, 00:19:14.450 "bdev_auto_examine": true, 00:19:14.450 "iobuf_small_cache_size": 128, 00:19:14.450 "iobuf_large_cache_size": 16 00:19:14.450 } 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "method": "bdev_raid_set_options", 00:19:14.450 "params": { 00:19:14.450 "process_window_size_kb": 1024 00:19:14.450 } 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "method": "bdev_iscsi_set_options", 00:19:14.450 "params": { 00:19:14.450 "timeout_sec": 30 00:19:14.450 } 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "method": "bdev_nvme_set_options", 00:19:14.450 "params": { 00:19:14.450 "action_on_timeout": "none", 
00:19:14.450 "timeout_us": 0, 00:19:14.450 "timeout_admin_us": 0, 00:19:14.450 "keep_alive_timeout_ms": 10000, 00:19:14.450 "arbitration_burst": 0, 00:19:14.450 "low_priority_weight": 0, 00:19:14.450 "medium_priority_weight": 0, 00:19:14.450 "high_priority_weight": 0, 00:19:14.450 "nvme_adminq_poll_period_us": 10000, 00:19:14.450 "nvme_ioq_poll_period_us": 0, 00:19:14.450 "io_queue_requests": 512, 00:19:14.450 "delay_cmd_submit": true, 00:19:14.450 "transport_retry_count": 4, 00:19:14.450 "bdev_retry_count": 3, 00:19:14.450 "transport_ack_timeout": 0, 00:19:14.450 "ctrlr_loss_timeout_sec": 0, 00:19:14.450 "reconnect_delay_sec": 0, 00:19:14.450 "fast_io_fail_timeout_sec": 0, 00:19:14.450 "disable_auto_failback": false, 00:19:14.450 "generate_uuids": false, 00:19:14.450 "transport_tos": 0, 00:19:14.450 "nvme_error_stat": false, 00:19:14.450 "rdma_srq_size": 0, 00:19:14.450 "io_path_stat": false, 00:19:14.450 "allow_accel_sequence": false, 00:19:14.450 "rdma_max_cq_size": 0, 00:19:14.450 "rdma_cm_event_timeout_ms": 0, 00:19:14.450 "dhchap_digests": [ 00:19:14.450 "sha256", 00:19:14.450 "sha384", 00:19:14.450 "sha512" 00:19:14.450 ], 00:19:14.450 "dhchap_dhgroups": [ 00:19:14.450 "null", 00:19:14.450 "ffdhe2048", 00:19:14.450 "ffdhe3072", 00:19:14.450 "ffdhe4096", 00:19:14.450 "ffdhe6144", 00:19:14.450 "ffdhe8192" 00:19:14.450 ] 00:19:14.450 } 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "method": "bdev_nvme_attach_controller", 00:19:14.450 "params": { 00:19:14.450 "name": "nvme0", 00:19:14.450 "trtype": "TCP", 00:19:14.450 "adrfam": "IPv4", 00:19:14.450 "traddr": "10.0.0.2", 00:19:14.450 "trsvcid": "4420", 00:19:14.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.450 "prchk_reftag": false, 00:19:14.450 "prchk_guard": false, 00:19:14.450 "ctrlr_loss_timeout_sec": 0, 00:19:14.450 "reconnect_delay_sec": 0, 00:19:14.450 "fast_io_fail_timeout_sec": 0, 00:19:14.450 "psk": "key0", 00:19:14.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.450 "hdgst": false, 
00:19:14.450 "ddgst": false 00:19:14.450 } 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "method": "bdev_nvme_set_hotplug", 00:19:14.450 "params": { 00:19:14.450 "period_us": 100000, 00:19:14.450 "enable": false 00:19:14.450 } 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "method": "bdev_enable_histogram", 00:19:14.450 "params": { 00:19:14.450 "name": "nvme0n1", 00:19:14.450 "enable": true 00:19:14.450 } 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "method": "bdev_wait_for_examine" 00:19:14.450 } 00:19:14.450 ] 00:19:14.450 }, 00:19:14.450 { 00:19:14.450 "subsystem": "nbd", 00:19:14.450 "config": [] 00:19:14.450 } 00:19:14.450 ] 00:19:14.450 }' 00:19:14.450 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.450 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.450 [2024-07-15 17:01:21.096018] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:19:14.450 [2024-07-15 17:01:21.096062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117193 ] 00:19:14.450 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.710 [2024-07-15 17:01:21.149498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.710 [2024-07-15 17:01:21.222105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.710 [2024-07-15 17:01:21.373245] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.278 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.278 17:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:15.278 17:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:19:15.278 17:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:15.537 17:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.537 17:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:15.537 Running I/O for 1 seconds... 00:19:16.913 00:19:16.913 Latency(us) 00:19:16.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.913 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:16.913 Verification LBA range: start 0x0 length 0x2000 00:19:16.913 nvme0n1 : 1.04 3039.65 11.87 0.00 0.00 41468.64 4900.95 43766.65 00:19:16.913 =================================================================================================================== 00:19:16.913 Total : 3039.65 11.87 0.00 0.00 41468.64 4900.95 43766.65 00:19:16.913 0 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:16.913 nvmf_trace.0 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 117193 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 117193 ']' 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 117193 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117193 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117193' 00:19:16.913 killing process with pid 117193 00:19:16.913 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 117193 00:19:16.913 Received shutdown signal, test time was about 1.000000 seconds 00:19:16.913 00:19:16.914 Latency(us) 00:19:16.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.914 =================================================================================================================== 00:19:16.914 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 117193 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.914 rmmod nvme_tcp 00:19:16.914 rmmod nvme_fabrics 00:19:16.914 rmmod nvme_keyring 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 117032 ']' 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 117032 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 117032 ']' 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 117032 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:16.914 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117032 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117032' 00:19:17.171 killing process with pid 117032 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 117032 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 117032 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:17.171 17:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.172 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.172 17:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.729 17:01:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.729 17:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.JDqONlHdxr /tmp/tmp.YMq2QTIsYv /tmp/tmp.1Bh9UJOzOm 00:19:19.729 00:19:19.729 real 1m23.365s 00:19:19.729 user 2m9.169s 00:19:19.729 sys 0m27.823s 00:19:19.729 17:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:19.729 17:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.729 ************************************ 00:19:19.729 END TEST nvmf_tls 00:19:19.729 ************************************ 00:19:19.729 17:01:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:19.729 17:01:25 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:19.729 17:01:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:19.729 17:01:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.729 17:01:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:19.729 ************************************ 00:19:19.729 START TEST nvmf_fips 00:19:19.730 ************************************ 00:19:19.730 17:01:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:19.730 * Looking for test storage... 00:19:19.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:19.730 Error setting digest 00:19:19.730 00D22F0A7B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:19.730 00D22F0A7B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:19:19.730 17:01:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:24.994 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:24.994 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:24.994 Found net devices under 0000:86:00.0: cvl_0_0 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:24.994 Found net devices under 0000:86:00.1: cvl_0_1 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.994 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:24.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:19:24.995 00:19:24.995 --- 10.0.0.2 ping statistics --- 00:19:24.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.995 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:19:24.995 00:19:24.995 --- 10.0.0.1 ping statistics --- 00:19:24.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.995 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=121203 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 121203 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 121203 ']' 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.995 17:01:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:25.253 [2024-07-15 17:01:31.684871] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:19:25.253 [2024-07-15 17:01:31.684924] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.253 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.253 [2024-07-15 17:01:31.744606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.253 [2024-07-15 17:01:31.814879] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.253 [2024-07-15 17:01:31.814921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.253 [2024-07-15 17:01:31.814927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.253 [2024-07-15 17:01:31.814933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.253 [2024-07-15 17:01:31.814938] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:25.253 [2024-07-15 17:01:31.814960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.817 17:01:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.818 17:01:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:25.818 17:01:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.818 17:01:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.818 17:01:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:26.075 [2024-07-15 17:01:32.652774] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.075 [2024-07-15 17:01:32.668773] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:19:26.075 [2024-07-15 17:01:32.668964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.075 [2024-07-15 17:01:32.697112] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:26.075 malloc0 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=121269 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 121269 /var/tmp/bdevperf.sock 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 121269 ']' 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.075 17:01:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:26.333 [2024-07-15 17:01:32.775995] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:19:26.333 [2024-07-15 17:01:32.776044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121269 ] 00:19:26.333 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.333 [2024-07-15 17:01:32.827321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.333 [2024-07-15 17:01:32.900798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.897 17:01:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.897 17:01:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:26.897 17:01:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:27.153 [2024-07-15 17:01:33.711461] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.153 [2024-07-15 17:01:33.711544] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:27.153 TLSTESTn1 00:19:27.153 17:01:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:27.410 Running I/O for 10 seconds... 
00:19:37.377 00:19:37.377 Latency(us) 00:19:37.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.377 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:37.377 Verification LBA range: start 0x0 length 0x2000 00:19:37.377 TLSTESTn1 : 10.02 5354.41 20.92 0.00 0.00 23867.76 6468.12 46274.11 00:19:37.377 =================================================================================================================== 00:19:37.377 Total : 5354.41 20.92 0.00 0.00 23867.76 6468.12 46274.11 00:19:37.377 0 00:19:37.377 17:01:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:37.377 17:01:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:37.377 17:01:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:37.377 17:01:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:37.377 17:01:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:37.377 17:01:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:37.377 17:01:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:37.377 17:01:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:37.377 17:01:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:37.377 17:01:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:37.377 nvmf_trace.0 00:19:37.377 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:37.377 17:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 121269 00:19:37.377 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 121269 ']' 00:19:37.377 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 
121269 00:19:37.377 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:37.377 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.377 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121269 00:19:37.634 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121269' 00:19:37.635 killing process with pid 121269 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 121269 00:19:37.635 Received shutdown signal, test time was about 10.000000 seconds 00:19:37.635 00:19:37.635 Latency(us) 00:19:37.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.635 =================================================================================================================== 00:19:37.635 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.635 [2024-07-15 17:01:44.056739] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 121269 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:19:37.635 rmmod nvme_tcp 00:19:37.635 rmmod nvme_fabrics 00:19:37.635 rmmod nvme_keyring 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 121203 ']' 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 121203 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 121203 ']' 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 121203 00:19:37.635 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121203 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121203' 00:19:37.893 killing process with pid 121203 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 121203 00:19:37.893 [2024-07-15 17:01:44.341995] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 121203 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:37.893 
17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.893 17:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.442 17:01:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:40.442 17:01:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:40.442 00:19:40.442 real 0m20.657s 00:19:40.442 user 0m22.727s 00:19:40.442 sys 0m8.683s 00:19:40.442 17:01:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:40.442 17:01:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:40.442 ************************************ 00:19:40.442 END TEST nvmf_fips 00:19:40.442 ************************************ 00:19:40.442 17:01:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:40.442 17:01:46 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:40.442 17:01:46 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:40.442 17:01:46 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:40.442 17:01:46 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:40.442 17:01:46 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:40.442 17:01:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.709 17:01:51 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.709 17:01:51 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.709 17:01:51 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.709 17:01:51 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:19:45.709 17:01:51 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:45.710 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:45.710 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.710 
17:01:51 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:45.710 Found net devices under 0000:86:00.0: cvl_0_0 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:45.710 Found net devices under 0000:86:00.1: cvl_0_1 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:45.710 17:01:51 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:45.710 17:01:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:45.710 17:01:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.710 17:01:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.710 ************************************ 00:19:45.710 START TEST nvmf_perf_adq 00:19:45.710 ************************************ 00:19:45.710 17:01:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:45.710 * Looking for test storage... 00:19:45.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.710 17:01:52 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:45.710 17:01:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.975 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:50.976 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:50.976 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:50.976 Found net devices under 0000:86:00.0: cvl_0_0 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:50.976 Found net devices under 0000:86:00.1: cvl_0_1 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:50.976 17:01:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:51.912 17:01:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:53.819 17:02:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:59.087 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:59.087 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:19:59.088 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:59.088 Found net devices under 0000:86:00.0: cvl_0_0 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:59.088 Found net devices under 0000:86:00.1: cvl_0_1 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:59.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:19:59.088 00:19:59.088 --- 10.0.0.2 ping statistics --- 00:19:59.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.088 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:19:59.088 00:19:59.088 --- 10.0.0.1 ping statistics --- 00:19:59.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.088 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=131146 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 131146 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- 
# '[' -z 131146 ']' 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.088 17:02:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.088 [2024-07-15 17:02:05.732964] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:19:59.088 [2024-07-15 17:02:05.733009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.344 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.344 [2024-07-15 17:02:05.793424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.344 [2024-07-15 17:02:05.873786] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.344 [2024-07-15 17:02:05.873827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.344 [2024-07-15 17:02:05.873834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.344 [2024-07-15 17:02:05.873840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.344 [2024-07-15 17:02:05.873845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.344 [2024-07-15 17:02:05.875247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.344 [2024-07-15 17:02:05.875267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.344 [2024-07-15 17:02:05.875354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.344 [2024-07-15 17:02:05.875355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.906 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.906 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:59.906 17:02:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.906 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.906 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.906 17:02:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.906 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:59.906 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:59.906 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:59.906 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.163 [2024-07-15 17:02:06.719910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.163 Malloc1 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.163 
17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.163 [2024-07-15 17:02:06.767717] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=131341 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:00.163 17:02:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:00.163 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.684 17:02:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:02.684 17:02:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.684 17:02:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.684 17:02:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.684 17:02:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:02.684 
"tick_rate": 2300000000, 00:20:02.684 "poll_groups": [ 00:20:02.684 { 00:20:02.684 "name": "nvmf_tgt_poll_group_000", 00:20:02.684 "admin_qpairs": 1, 00:20:02.684 "io_qpairs": 1, 00:20:02.684 "current_admin_qpairs": 1, 00:20:02.684 "current_io_qpairs": 1, 00:20:02.684 "pending_bdev_io": 0, 00:20:02.684 "completed_nvme_io": 20669, 00:20:02.684 "transports": [ 00:20:02.684 { 00:20:02.684 "trtype": "TCP" 00:20:02.684 } 00:20:02.684 ] 00:20:02.684 }, 00:20:02.684 { 00:20:02.684 "name": "nvmf_tgt_poll_group_001", 00:20:02.684 "admin_qpairs": 0, 00:20:02.684 "io_qpairs": 1, 00:20:02.684 "current_admin_qpairs": 0, 00:20:02.684 "current_io_qpairs": 1, 00:20:02.684 "pending_bdev_io": 0, 00:20:02.684 "completed_nvme_io": 20953, 00:20:02.684 "transports": [ 00:20:02.684 { 00:20:02.684 "trtype": "TCP" 00:20:02.684 } 00:20:02.684 ] 00:20:02.684 }, 00:20:02.684 { 00:20:02.684 "name": "nvmf_tgt_poll_group_002", 00:20:02.684 "admin_qpairs": 0, 00:20:02.684 "io_qpairs": 1, 00:20:02.684 "current_admin_qpairs": 0, 00:20:02.684 "current_io_qpairs": 1, 00:20:02.684 "pending_bdev_io": 0, 00:20:02.684 "completed_nvme_io": 20852, 00:20:02.684 "transports": [ 00:20:02.684 { 00:20:02.684 "trtype": "TCP" 00:20:02.684 } 00:20:02.684 ] 00:20:02.684 }, 00:20:02.684 { 00:20:02.684 "name": "nvmf_tgt_poll_group_003", 00:20:02.684 "admin_qpairs": 0, 00:20:02.684 "io_qpairs": 1, 00:20:02.684 "current_admin_qpairs": 0, 00:20:02.684 "current_io_qpairs": 1, 00:20:02.684 "pending_bdev_io": 0, 00:20:02.684 "completed_nvme_io": 20600, 00:20:02.684 "transports": [ 00:20:02.684 { 00:20:02.684 "trtype": "TCP" 00:20:02.684 } 00:20:02.684 ] 00:20:02.684 } 00:20:02.684 ] 00:20:02.684 }' 00:20:02.684 17:02:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:02.684 17:02:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:02.684 17:02:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:02.684 17:02:08 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:02.684 17:02:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 131341 00:20:10.859 Initializing NVMe Controllers 00:20:10.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:10.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:10.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:10.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:10.859 Initialization complete. Launching workers. 00:20:10.859 ======================================================== 00:20:10.859 Latency(us) 00:20:10.859 Device Information : IOPS MiB/s Average min max 00:20:10.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10850.40 42.38 5899.29 1928.71 10460.86 00:20:10.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11036.80 43.11 5799.81 2060.05 9747.41 00:20:10.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10958.50 42.81 5840.57 2385.07 9321.02 00:20:10.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10942.00 42.74 5849.38 1872.43 10456.54 00:20:10.859 ======================================================== 00:20:10.859 Total : 43787.69 171.05 5847.05 1872.43 10460.86 00:20:10.859 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:10.859 17:02:16 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:10.859 rmmod nvme_tcp 00:20:10.859 rmmod nvme_fabrics 00:20:10.859 rmmod nvme_keyring 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 131146 ']' 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 131146 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 131146 ']' 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 131146 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.859 17:02:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131146 00:20:10.859 17:02:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:10.859 17:02:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:10.859 17:02:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131146' 00:20:10.859 killing process with pid 131146 00:20:10.859 17:02:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 131146 00:20:10.859 17:02:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 131146 00:20:10.859 17:02:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:10.859 17:02:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:10.859 17:02:17 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:10.859 17:02:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.860 17:02:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:10.860 17:02:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.860 17:02:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.860 17:02:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.765 17:02:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.765 17:02:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:12.765 17:02:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:14.144 17:02:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:16.049 17:02:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.315 17:02:27 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:21.315 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:21.315 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:21.315 Found net devices under 0000:86:00.0: cvl_0_0 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:21.315 Found net devices under 0000:86:00.1: cvl_0_1 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.315 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:21.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:21.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:20:21.316 00:20:21.316 --- 10.0.0.2 ping statistics --- 00:20:21.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.316 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:20:21.316 00:20:21.316 --- 10.0.0.1 ping statistics --- 00:20:21.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.316 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:21.316 net.core.busy_poll = 1 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:21.316 net.core.busy_read = 1 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=135099 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 135099 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:21.316 17:02:27 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 135099 ']' 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.316 17:02:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:21.316 [2024-07-15 17:02:27.962726] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:20:21.316 [2024-07-15 17:02:27.962772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.573 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.573 [2024-07-15 17:02:28.022970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.573 [2024-07-15 17:02:28.108075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.573 [2024-07-15 17:02:28.108110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.573 [2024-07-15 17:02:28.108117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.573 [2024-07-15 17:02:28.108123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:21.573 [2024-07-15 17:02:28.108128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.573 [2024-07-15 17:02:28.108173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.573 [2024-07-15 17:02:28.108272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.573 [2024-07-15 17:02:28.108357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.573 [2024-07-15 17:02:28.108359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.137 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.137 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:22.137 17:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.137 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:22.137 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.137 17:02:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.137 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:22.137 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:22.137 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:22.137 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.137 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.395 [2024-07-15 17:02:28.938834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.395 Malloc1 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.395 [2024-07-15 17:02:28.986534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=135232 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:22.395 17:02:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:22.395 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.928 17:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:24.928 17:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.928 17:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.928 17:02:31 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.928 17:02:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:24.928 "tick_rate": 2300000000, 00:20:24.928 "poll_groups": [ 00:20:24.928 { 00:20:24.928 "name": "nvmf_tgt_poll_group_000", 00:20:24.928 "admin_qpairs": 1, 00:20:24.928 "io_qpairs": 1, 00:20:24.928 "current_admin_qpairs": 1, 00:20:24.928 "current_io_qpairs": 1, 00:20:24.928 "pending_bdev_io": 0, 00:20:24.928 "completed_nvme_io": 27446, 00:20:24.928 "transports": [ 00:20:24.928 { 00:20:24.928 "trtype": "TCP" 00:20:24.928 } 00:20:24.928 ] 00:20:24.928 }, 00:20:24.928 { 00:20:24.928 "name": "nvmf_tgt_poll_group_001", 00:20:24.928 "admin_qpairs": 0, 00:20:24.928 "io_qpairs": 3, 00:20:24.928 "current_admin_qpairs": 0, 00:20:24.928 "current_io_qpairs": 3, 00:20:24.928 "pending_bdev_io": 0, 00:20:24.928 "completed_nvme_io": 30810, 00:20:24.928 "transports": [ 00:20:24.928 { 00:20:24.928 "trtype": "TCP" 00:20:24.928 } 00:20:24.928 ] 00:20:24.928 }, 00:20:24.928 { 00:20:24.928 "name": "nvmf_tgt_poll_group_002", 00:20:24.928 "admin_qpairs": 0, 00:20:24.928 "io_qpairs": 0, 00:20:24.928 "current_admin_qpairs": 0, 00:20:24.928 "current_io_qpairs": 0, 00:20:24.928 "pending_bdev_io": 0, 00:20:24.928 "completed_nvme_io": 0, 00:20:24.928 "transports": [ 00:20:24.928 { 00:20:24.928 "trtype": "TCP" 00:20:24.928 } 00:20:24.928 ] 00:20:24.928 }, 00:20:24.928 { 00:20:24.928 "name": "nvmf_tgt_poll_group_003", 00:20:24.928 "admin_qpairs": 0, 00:20:24.928 "io_qpairs": 0, 00:20:24.928 "current_admin_qpairs": 0, 00:20:24.928 "current_io_qpairs": 0, 00:20:24.928 "pending_bdev_io": 0, 00:20:24.928 "completed_nvme_io": 0, 00:20:24.928 "transports": [ 00:20:24.928 { 00:20:24.928 "trtype": "TCP" 00:20:24.928 } 00:20:24.928 ] 00:20:24.928 } 00:20:24.928 ] 00:20:24.928 }' 00:20:24.928 17:02:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:24.928 17:02:31 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:20:24.928 17:02:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:24.928 17:02:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:24.928 17:02:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 135232 00:20:33.045 Initializing NVMe Controllers 00:20:33.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:33.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:33.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:33.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:33.045 Initialization complete. Launching workers. 00:20:33.045 ======================================================== 00:20:33.045 Latency(us) 00:20:33.045 Device Information : IOPS MiB/s Average min max 00:20:33.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4908.20 19.17 13046.75 1627.58 58511.49 00:20:33.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5094.80 19.90 12606.13 1468.83 57051.29 00:20:33.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14787.10 57.76 4341.45 1167.04 45424.90 00:20:33.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6060.20 23.67 10597.29 1516.00 56741.10 00:20:33.045 ======================================================== 00:20:33.045 Total : 30850.29 120.51 8320.21 1167.04 58511.49 00:20:33.045 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:33.045 rmmod nvme_tcp 00:20:33.045 rmmod nvme_fabrics 00:20:33.045 rmmod nvme_keyring 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 135099 ']' 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 135099 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 135099 ']' 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 135099 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135099 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135099' 00:20:33.045 killing process with pid 135099 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 135099 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 135099 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' 
'' == iso ']' 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.045 17:02:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.383 17:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:36.383 17:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:36.383 00:20:36.383 real 0m50.601s 00:20:36.383 user 2m49.541s 00:20:36.383 sys 0m9.313s 00:20:36.383 17:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:36.383 17:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 ************************************ 00:20:36.383 END TEST nvmf_perf_adq 00:20:36.383 ************************************ 00:20:36.383 17:02:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:36.383 17:02:42 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:36.383 17:02:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:36.383 17:02:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:36.383 17:02:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 ************************************ 00:20:36.383 START TEST nvmf_shutdown 00:20:36.383 ************************************ 00:20:36.383 
17:02:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:36.383 * Looking for test storage... 00:20:36.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.383 
17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:36.383 17:02:42 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 ************************************ 00:20:36.383 START TEST nvmf_shutdown_tc1 00:20:36.383 ************************************ 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.383 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.384 17:02:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.384 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:36.384 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:36.384 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:36.384 17:02:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:41.650 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:41.650 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:41.650 Found net devices under 0000:86:00.0: cvl_0_0 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.650 17:02:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:41.650 Found net devices under 0000:86:00.1: cvl_0_1 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:41.650 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:41.651 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:41.651 
17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:41.651 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.651 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:41.651 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:41.651 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:41.651 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:41.651 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:41.651 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:41.651 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:41.651 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:41.910 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:41.910 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:41.910 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:41.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:41.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:20:41.910 00:20:41.910 --- 10.0.0.2 ping statistics --- 00:20:41.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.911 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:41.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:41.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:20:41.911 00:20:41.911 --- 10.0.0.1 ping statistics --- 00:20:41.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.911 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=140671 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 140671 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 140671 ']' 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.911 17:02:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.911 [2024-07-15 17:02:48.451045] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:20:41.911 [2024-07-15 17:02:48.451094] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.911 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.911 [2024-07-15 17:02:48.510038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.169 [2024-07-15 17:02:48.585166] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.169 [2024-07-15 17:02:48.585205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.169 [2024-07-15 17:02:48.585212] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.169 [2024-07-15 17:02:48.585217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.169 [2024-07-15 17:02:48.585222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.169 [2024-07-15 17:02:48.585329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.169 [2024-07-15 17:02:48.585418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.169 [2024-07-15 17:02:48.585504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.169 [2024-07-15 17:02:48.585506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.737 [2024-07-15 17:02:49.305165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:42.737 
17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:42.737 17:02:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.737 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.737 Malloc1 00:20:42.737 [2024-07-15 17:02:49.400883] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.996 Malloc2 00:20:42.996 Malloc3 00:20:42.996 Malloc4 00:20:42.996 Malloc5 00:20:42.996 Malloc6 00:20:42.996 Malloc7 00:20:43.255 Malloc8 00:20:43.255 Malloc9 00:20:43.255 Malloc10 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=140950 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 140950 
/var/tmp/bdevperf.sock 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 140950 ']' 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.255 { 00:20:43.255 "params": { 00:20:43.255 "name": "Nvme$subsystem", 00:20:43.255 "trtype": "$TEST_TRANSPORT", 00:20:43.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.255 "adrfam": "ipv4", 00:20:43.255 "trsvcid": "$NVMF_PORT", 00:20:43.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.255 "hdgst": ${hdgst:-false}, 00:20:43.255 "ddgst": ${ddgst:-false} 00:20:43.255 }, 00:20:43.255 "method": "bdev_nvme_attach_controller" 00:20:43.255 } 00:20:43.255 EOF 00:20:43.255 )") 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.255 { 00:20:43.255 "params": { 00:20:43.255 "name": "Nvme$subsystem", 00:20:43.255 "trtype": "$TEST_TRANSPORT", 00:20:43.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.255 "adrfam": "ipv4", 00:20:43.255 "trsvcid": "$NVMF_PORT", 00:20:43.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.255 "hdgst": ${hdgst:-false}, 00:20:43.255 "ddgst": ${ddgst:-false} 00:20:43.255 
}, 00:20:43.255 "method": "bdev_nvme_attach_controller" 00:20:43.255 } 00:20:43.255 EOF 00:20:43.255 )") 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.255 { 00:20:43.255 "params": { 00:20:43.255 "name": "Nvme$subsystem", 00:20:43.255 "trtype": "$TEST_TRANSPORT", 00:20:43.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.255 "adrfam": "ipv4", 00:20:43.255 "trsvcid": "$NVMF_PORT", 00:20:43.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.255 "hdgst": ${hdgst:-false}, 00:20:43.255 "ddgst": ${ddgst:-false} 00:20:43.255 }, 00:20:43.255 "method": "bdev_nvme_attach_controller" 00:20:43.255 } 00:20:43.255 EOF 00:20:43.255 )") 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:43.255 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.255 { 00:20:43.255 "params": { 00:20:43.255 "name": "Nvme$subsystem", 00:20:43.255 "trtype": "$TEST_TRANSPORT", 00:20:43.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.255 "adrfam": "ipv4", 00:20:43.255 "trsvcid": "$NVMF_PORT", 00:20:43.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.255 "hdgst": ${hdgst:-false}, 00:20:43.255 "ddgst": ${ddgst:-false} 00:20:43.255 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 } 00:20:43.256 EOF 00:20:43.256 )") 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:43.256 17:02:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.256 { 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme$subsystem", 00:20:43.256 "trtype": "$TEST_TRANSPORT", 00:20:43.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "$NVMF_PORT", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.256 "hdgst": ${hdgst:-false}, 00:20:43.256 "ddgst": ${ddgst:-false} 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 } 00:20:43.256 EOF 00:20:43.256 )") 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.256 { 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme$subsystem", 00:20:43.256 "trtype": "$TEST_TRANSPORT", 00:20:43.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "$NVMF_PORT", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.256 "hdgst": ${hdgst:-false}, 00:20:43.256 "ddgst": ${ddgst:-false} 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 } 00:20:43.256 EOF 00:20:43.256 )") 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.256 { 00:20:43.256 
"params": { 00:20:43.256 "name": "Nvme$subsystem", 00:20:43.256 "trtype": "$TEST_TRANSPORT", 00:20:43.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "$NVMF_PORT", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.256 "hdgst": ${hdgst:-false}, 00:20:43.256 "ddgst": ${ddgst:-false} 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 } 00:20:43.256 EOF 00:20:43.256 )") 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:43.256 [2024-07-15 17:02:49.872364] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:20:43.256 [2024-07-15 17:02:49.872412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.256 { 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme$subsystem", 00:20:43.256 "trtype": "$TEST_TRANSPORT", 00:20:43.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "$NVMF_PORT", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.256 "hdgst": ${hdgst:-false}, 00:20:43.256 "ddgst": ${ddgst:-false} 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 } 00:20:43.256 EOF 00:20:43.256 )") 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.256 { 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme$subsystem", 00:20:43.256 "trtype": "$TEST_TRANSPORT", 00:20:43.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "$NVMF_PORT", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.256 "hdgst": ${hdgst:-false}, 00:20:43.256 "ddgst": ${ddgst:-false} 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 } 00:20:43.256 EOF 00:20:43.256 )") 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.256 { 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme$subsystem", 00:20:43.256 "trtype": "$TEST_TRANSPORT", 00:20:43.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "$NVMF_PORT", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.256 "hdgst": ${hdgst:-false}, 00:20:43.256 "ddgst": ${ddgst:-false} 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 } 00:20:43.256 EOF 00:20:43.256 )") 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:43.256 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:43.256 17:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme1", 00:20:43.256 "trtype": "tcp", 00:20:43.256 "traddr": "10.0.0.2", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "4420", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.256 "hdgst": false, 00:20:43.256 "ddgst": false 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 },{ 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme2", 00:20:43.256 "trtype": "tcp", 00:20:43.256 "traddr": "10.0.0.2", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "4420", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:43.256 "hdgst": false, 00:20:43.256 "ddgst": false 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 },{ 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme3", 00:20:43.256 "trtype": "tcp", 00:20:43.256 "traddr": "10.0.0.2", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "4420", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:43.256 "hdgst": false, 00:20:43.256 "ddgst": false 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 },{ 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme4", 00:20:43.256 "trtype": "tcp", 00:20:43.256 "traddr": "10.0.0.2", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "4420", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:43.256 "hdgst": false, 00:20:43.256 "ddgst": false 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 },{ 
00:20:43.256 "params": { 00:20:43.256 "name": "Nvme5", 00:20:43.256 "trtype": "tcp", 00:20:43.256 "traddr": "10.0.0.2", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "4420", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:43.256 "hdgst": false, 00:20:43.256 "ddgst": false 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 },{ 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme6", 00:20:43.256 "trtype": "tcp", 00:20:43.256 "traddr": "10.0.0.2", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "4420", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:43.256 "hdgst": false, 00:20:43.256 "ddgst": false 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 },{ 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme7", 00:20:43.256 "trtype": "tcp", 00:20:43.256 "traddr": "10.0.0.2", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "4420", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:43.256 "hdgst": false, 00:20:43.256 "ddgst": false 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 },{ 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme8", 00:20:43.256 "trtype": "tcp", 00:20:43.256 "traddr": "10.0.0.2", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "4420", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:43.256 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:43.256 "hdgst": false, 00:20:43.256 "ddgst": false 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 },{ 00:20:43.256 "params": { 00:20:43.256 "name": "Nvme9", 00:20:43.256 "trtype": "tcp", 00:20:43.256 "traddr": "10.0.0.2", 00:20:43.256 "adrfam": "ipv4", 00:20:43.256 "trsvcid": "4420", 00:20:43.256 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:43.256 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:43.256 "hdgst": false, 00:20:43.256 "ddgst": false 00:20:43.256 }, 00:20:43.256 "method": "bdev_nvme_attach_controller" 00:20:43.256 },{ 00:20:43.256 "params": { 00:20:43.257 "name": "Nvme10", 00:20:43.257 "trtype": "tcp", 00:20:43.257 "traddr": "10.0.0.2", 00:20:43.257 "adrfam": "ipv4", 00:20:43.257 "trsvcid": "4420", 00:20:43.257 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:43.257 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:43.257 "hdgst": false, 00:20:43.257 "ddgst": false 00:20:43.257 }, 00:20:43.257 "method": "bdev_nvme_attach_controller" 00:20:43.257 }' 00:20:43.514 [2024-07-15 17:02:49.927253] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.514 [2024-07-15 17:02:50.000695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.891 17:02:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.891 17:02:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:44.891 17:02:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:44.891 17:02:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.891 17:02:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.891 17:02:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.891 17:02:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 140950 00:20:44.891 17:02:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:44.891 17:02:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:45.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 140950 Killed 
$rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 140671 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.823 { 00:20:45.823 "params": { 00:20:45.823 "name": "Nvme$subsystem", 00:20:45.823 "trtype": "$TEST_TRANSPORT", 00:20:45.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.823 "adrfam": "ipv4", 00:20:45.823 "trsvcid": "$NVMF_PORT", 00:20:45.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.823 "hdgst": ${hdgst:-false}, 00:20:45.823 "ddgst": ${ddgst:-false} 00:20:45.823 }, 00:20:45.823 "method": "bdev_nvme_attach_controller" 00:20:45.823 } 00:20:45.823 EOF 00:20:45.823 )") 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.823 { 00:20:45.823 "params": { 00:20:45.823 "name": "Nvme$subsystem", 
00:20:45.823 "trtype": "$TEST_TRANSPORT", 00:20:45.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.823 "adrfam": "ipv4", 00:20:45.823 "trsvcid": "$NVMF_PORT", 00:20:45.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.823 "hdgst": ${hdgst:-false}, 00:20:45.823 "ddgst": ${ddgst:-false} 00:20:45.823 }, 00:20:45.823 "method": "bdev_nvme_attach_controller" 00:20:45.823 } 00:20:45.823 EOF 00:20:45.823 )") 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.823 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.823 { 00:20:45.823 "params": { 00:20:45.823 "name": "Nvme$subsystem", 00:20:45.823 "trtype": "$TEST_TRANSPORT", 00:20:45.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.823 "adrfam": "ipv4", 00:20:45.823 "trsvcid": "$NVMF_PORT", 00:20:45.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.823 "hdgst": ${hdgst:-false}, 00:20:45.823 "ddgst": ${ddgst:-false} 00:20:45.823 }, 00:20:45.824 "method": "bdev_nvme_attach_controller" 00:20:45.824 } 00:20:45.824 EOF 00:20:45.824 )") 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.824 { 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme$subsystem", 00:20:45.824 "trtype": "$TEST_TRANSPORT", 00:20:45.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "$NVMF_PORT", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.824 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.824 "hdgst": ${hdgst:-false}, 00:20:45.824 "ddgst": ${ddgst:-false} 00:20:45.824 }, 00:20:45.824 "method": "bdev_nvme_attach_controller" 00:20:45.824 } 00:20:45.824 EOF 00:20:45.824 )") 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.824 { 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme$subsystem", 00:20:45.824 "trtype": "$TEST_TRANSPORT", 00:20:45.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "$NVMF_PORT", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.824 "hdgst": ${hdgst:-false}, 00:20:45.824 "ddgst": ${ddgst:-false} 00:20:45.824 }, 00:20:45.824 "method": "bdev_nvme_attach_controller" 00:20:45.824 } 00:20:45.824 EOF 00:20:45.824 )") 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.824 { 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme$subsystem", 00:20:45.824 "trtype": "$TEST_TRANSPORT", 00:20:45.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "$NVMF_PORT", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.824 "hdgst": ${hdgst:-false}, 00:20:45.824 "ddgst": ${ddgst:-false} 00:20:45.824 }, 00:20:45.824 "method": "bdev_nvme_attach_controller" 00:20:45.824 } 00:20:45.824 EOF 
00:20:45.824 )") 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.824 { 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme$subsystem", 00:20:45.824 "trtype": "$TEST_TRANSPORT", 00:20:45.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "$NVMF_PORT", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.824 "hdgst": ${hdgst:-false}, 00:20:45.824 "ddgst": ${ddgst:-false} 00:20:45.824 }, 00:20:45.824 "method": "bdev_nvme_attach_controller" 00:20:45.824 } 00:20:45.824 EOF 00:20:45.824 )") 00:20:45.824 [2024-07-15 17:02:52.407532] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:20:45.824 [2024-07-15 17:02:52.407581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141438 ] 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.824 { 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme$subsystem", 00:20:45.824 "trtype": "$TEST_TRANSPORT", 00:20:45.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "$NVMF_PORT", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.824 "hdgst": ${hdgst:-false}, 00:20:45.824 "ddgst": ${ddgst:-false} 00:20:45.824 }, 00:20:45.824 "method": "bdev_nvme_attach_controller" 00:20:45.824 } 00:20:45.824 EOF 00:20:45.824 )") 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.824 { 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme$subsystem", 00:20:45.824 "trtype": "$TEST_TRANSPORT", 00:20:45.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "$NVMF_PORT", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.824 "hdgst": ${hdgst:-false}, 00:20:45.824 "ddgst": ${ddgst:-false} 00:20:45.824 }, 00:20:45.824 "method": 
"bdev_nvme_attach_controller" 00:20:45.824 } 00:20:45.824 EOF 00:20:45.824 )") 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.824 { 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme$subsystem", 00:20:45.824 "trtype": "$TEST_TRANSPORT", 00:20:45.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "$NVMF_PORT", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.824 "hdgst": ${hdgst:-false}, 00:20:45.824 "ddgst": ${ddgst:-false} 00:20:45.824 }, 00:20:45.824 "method": "bdev_nvme_attach_controller" 00:20:45.824 } 00:20:45.824 EOF 00:20:45.824 )") 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.824 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:45.824 17:02:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme1", 00:20:45.824 "trtype": "tcp", 00:20:45.824 "traddr": "10.0.0.2", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "4420", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.824 "hdgst": false, 00:20:45.824 "ddgst": false 00:20:45.824 }, 00:20:45.824 "method": "bdev_nvme_attach_controller" 00:20:45.824 },{ 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme2", 00:20:45.824 "trtype": "tcp", 00:20:45.824 "traddr": "10.0.0.2", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "4420", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:45.824 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:45.824 "hdgst": false, 00:20:45.824 "ddgst": false 00:20:45.824 }, 00:20:45.824 "method": "bdev_nvme_attach_controller" 00:20:45.824 },{ 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme3", 00:20:45.824 "trtype": "tcp", 00:20:45.824 "traddr": "10.0.0.2", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "4420", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:45.824 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:45.824 "hdgst": false, 00:20:45.824 "ddgst": false 00:20:45.824 }, 00:20:45.824 "method": "bdev_nvme_attach_controller" 00:20:45.824 },{ 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme4", 00:20:45.824 "trtype": "tcp", 00:20:45.824 "traddr": "10.0.0.2", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "4420", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:45.824 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:45.824 "hdgst": false, 00:20:45.824 "ddgst": false 00:20:45.824 }, 00:20:45.824 "method": "bdev_nvme_attach_controller" 00:20:45.824 },{ 00:20:45.824 "params": { 00:20:45.824 "name": "Nvme5", 00:20:45.824 
"trtype": "tcp", 00:20:45.824 "traddr": "10.0.0.2", 00:20:45.824 "adrfam": "ipv4", 00:20:45.824 "trsvcid": "4420", 00:20:45.824 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:45.824 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:45.824 "hdgst": false, 00:20:45.824 "ddgst": false 00:20:45.824 }, 00:20:45.825 "method": "bdev_nvme_attach_controller" 00:20:45.825 },{ 00:20:45.825 "params": { 00:20:45.825 "name": "Nvme6", 00:20:45.825 "trtype": "tcp", 00:20:45.825 "traddr": "10.0.0.2", 00:20:45.825 "adrfam": "ipv4", 00:20:45.825 "trsvcid": "4420", 00:20:45.825 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:45.825 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:45.825 "hdgst": false, 00:20:45.825 "ddgst": false 00:20:45.825 }, 00:20:45.825 "method": "bdev_nvme_attach_controller" 00:20:45.825 },{ 00:20:45.825 "params": { 00:20:45.825 "name": "Nvme7", 00:20:45.825 "trtype": "tcp", 00:20:45.825 "traddr": "10.0.0.2", 00:20:45.825 "adrfam": "ipv4", 00:20:45.825 "trsvcid": "4420", 00:20:45.825 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:45.825 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:45.825 "hdgst": false, 00:20:45.825 "ddgst": false 00:20:45.825 }, 00:20:45.825 "method": "bdev_nvme_attach_controller" 00:20:45.825 },{ 00:20:45.825 "params": { 00:20:45.825 "name": "Nvme8", 00:20:45.825 "trtype": "tcp", 00:20:45.825 "traddr": "10.0.0.2", 00:20:45.825 "adrfam": "ipv4", 00:20:45.825 "trsvcid": "4420", 00:20:45.825 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:45.825 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:45.825 "hdgst": false, 00:20:45.825 "ddgst": false 00:20:45.825 }, 00:20:45.825 "method": "bdev_nvme_attach_controller" 00:20:45.825 },{ 00:20:45.825 "params": { 00:20:45.825 "name": "Nvme9", 00:20:45.825 "trtype": "tcp", 00:20:45.825 "traddr": "10.0.0.2", 00:20:45.825 "adrfam": "ipv4", 00:20:45.825 "trsvcid": "4420", 00:20:45.825 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:45.825 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:45.825 "hdgst": false, 00:20:45.825 "ddgst": 
false 00:20:45.825 }, 00:20:45.825 "method": "bdev_nvme_attach_controller" 00:20:45.825 },{ 00:20:45.825 "params": { 00:20:45.825 "name": "Nvme10", 00:20:45.825 "trtype": "tcp", 00:20:45.825 "traddr": "10.0.0.2", 00:20:45.825 "adrfam": "ipv4", 00:20:45.825 "trsvcid": "4420", 00:20:45.825 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:45.825 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:45.825 "hdgst": false, 00:20:45.825 "ddgst": false 00:20:45.825 }, 00:20:45.825 "method": "bdev_nvme_attach_controller" 00:20:45.825 }' 00:20:45.825 [2024-07-15 17:02:52.463232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.083 [2024-07-15 17:02:52.538131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.456 Running I/O for 1 seconds... 00:20:48.831 00:20:48.831 Latency(us) 00:20:48.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.831 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.831 Verification LBA range: start 0x0 length 0x400 00:20:48.831 Nvme1n1 : 1.00 255.18 15.95 0.00 0.00 248405.93 19033.93 216097.84 00:20:48.831 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.831 Verification LBA range: start 0x0 length 0x400 00:20:48.831 Nvme2n1 : 1.13 282.91 17.68 0.00 0.00 221094.42 16070.57 221568.67 00:20:48.831 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.831 Verification LBA range: start 0x0 length 0x400 00:20:48.831 Nvme3n1 : 1.12 286.48 17.91 0.00 0.00 215045.52 15956.59 215186.03 00:20:48.831 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.831 Verification LBA range: start 0x0 length 0x400 00:20:48.831 Nvme4n1 : 1.14 281.57 17.60 0.00 0.00 215608.63 16184.54 200597.15 00:20:48.831 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.831 Verification LBA range: start 0x0 length 0x400 00:20:48.831 Nvme5n1 : 1.13 291.93 18.25 
0.00 0.00 201782.62 10086.85 211538.81 00:20:48.831 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.831 Verification LBA range: start 0x0 length 0x400 00:20:48.831 Nvme6n1 : 1.14 283.81 17.74 0.00 0.00 207625.82 2194.03 214274.23 00:20:48.831 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.831 Verification LBA range: start 0x0 length 0x400 00:20:48.831 Nvme7n1 : 1.15 278.88 17.43 0.00 0.00 208288.90 29063.79 203332.56 00:20:48.831 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.831 Verification LBA range: start 0x0 length 0x400 00:20:48.831 Nvme8n1 : 1.15 278.71 17.42 0.00 0.00 205415.11 14930.81 218833.25 00:20:48.831 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.831 Verification LBA range: start 0x0 length 0x400 00:20:48.831 Nvme9n1 : 1.15 277.74 17.36 0.00 0.00 203233.37 17096.35 240716.58 00:20:48.831 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.831 Verification LBA range: start 0x0 length 0x400 00:20:48.831 Nvme10n1 : 1.16 276.93 17.31 0.00 0.00 200834.36 18350.08 224304.08 00:20:48.831 =================================================================================================================== 00:20:48.831 Total : 2794.15 174.63 0.00 0.00 211964.21 2194.03 240716.58 00:20:48.831 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:48.832 17:02:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:48.832 rmmod nvme_tcp 00:20:48.832 rmmod nvme_fabrics 00:20:48.832 rmmod nvme_keyring 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 140671 ']' 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 140671 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 140671 ']' 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 140671 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140671 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140671' 00:20:48.832 killing process with pid 140671 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 140671 00:20:48.832 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 140671 00:20:49.397 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.397 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.397 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.397 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.398 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.398 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.398 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.398 17:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:51.317 00:20:51.317 real 0m15.081s 00:20:51.317 user 0m34.383s 00:20:51.317 sys 0m5.452s 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.317 
************************************ 00:20:51.317 END TEST nvmf_shutdown_tc1 00:20:51.317 ************************************ 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:51.317 ************************************ 00:20:51.317 START TEST nvmf_shutdown_tc2 00:20:51.317 ************************************ 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:51.317 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:51.318 
17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:51.318 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:51.318 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.318 17:02:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:51.318 Found net devices under 0000:86:00.0: cvl_0_0 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.318 17:02:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:51.318 Found net devices under 0000:86:00.1: cvl_0_1 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:51.318 
17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:51.318 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.577 17:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:51.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:51.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:20:51.577 00:20:51.577 --- 10.0.0.2 ping statistics --- 00:20:51.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.577 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:20:51.577 00:20:51.577 --- 10.0.0.1 ping statistics --- 00:20:51.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.577 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
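The `nvmf_tcp_init` sequence above moves the target port into a network namespace, addresses both ends, opens TCP 4420, and verifies the path with cross-pings. A dry-run sketch of the same wiring follows; the `run` wrapper (invented here) only records and prints each command instead of executing it, so no root is needed. Interface and namespace names are taken from the log:

```shell
# Dry-run of the netns wiring performed by nvmf_tcp_init in the log.
# "run" records/prints commands rather than executing them (no root).
cmds=()
run() { cmds+=("$*"); printf '%s\n' "$*"; }

NS=cvl_0_0_ns_spdk          # target namespace
TGT_IF=cvl_0_0              # target-side port    -> 10.0.0.2/24
INI_IF=cvl_0_1              # initiator-side port -> 10.0.0.1/24

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```

Dropping the `run` prefix reproduces the exact command sequence between `common.sh@244` and `common.sh@268` above.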
00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=142458 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 142458 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 142458 ']' 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.577 17:02:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.835 [2024-07-15 17:02:58.279887] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:20:51.835 [2024-07-15 17:02:58.279935] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.835 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.835 [2024-07-15 17:02:58.337945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.835 [2024-07-15 17:02:58.410730] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.835 [2024-07-15 17:02:58.410767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.835 [2024-07-15 17:02:58.410774] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.835 [2024-07-15 17:02:58.410780] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.835 [2024-07-15 17:02:58.410785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
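The `waitforlisten 142458` call above blocks until the freshly started `nvmf_tgt` is listening on `/var/tmp/spdk.sock`, with `max_retries=100`. A minimal sketch of that retry loop, under stated assumptions: the function name is invented, and a plain existence test stands in for the real helper's RPC probe and pid check:

```shell
# Sketch of the waitforlisten retry loop from autotest_common.sh:
# poll until the app's RPC socket path appears, give up after
# max_retries. (The real helper also probes via rpc.py; omitted.)
waitforlisten_sketch() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$sock" ] && return 0   # stand-in for the RPC probe
        sleep 0.1
    done
    return 1
}
```

The same pattern recurs at `shutdown.sh@104` below for the bdevperf RPC socket `/var/tmp/bdevperf.sock`.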
00:20:51.835 [2024-07-15 17:02:58.410882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.835 [2024-07-15 17:02:58.410967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:51.835 [2024-07-15 17:02:58.411095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.835 [2024-07-15 17:02:58.411096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.770 [2024-07-15 17:02:59.130191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:52.770 
17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:52.770 17:02:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.770 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.770 Malloc1 00:20:52.770 [2024-07-15 17:02:59.225894] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.770 Malloc2 00:20:52.770 Malloc3 00:20:52.770 Malloc4 00:20:52.770 Malloc5 00:20:52.770 Malloc6 00:20:53.029 Malloc7 00:20:53.029 Malloc8 00:20:53.029 Malloc9 00:20:53.029 Malloc10 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=142735 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 142735 
/var/tmp/bdevperf.sock 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 142735 ']' 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
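`gen_nvmf_target_json 1 2 ... 10`, invoked above to feed bdevperf's `--json /dev/fd/63`, accumulates one heredoc JSON fragment per subsystem number and later joins them with commas. A reduced sketch of that accumulation (three subsystems, only the fields that vary; the jq normalization pass is omitted):

```shell
# Reduced sketch of gen_nvmf_target_json: one heredoc fragment per
# subsystem, collected into an array and comma-joined.
config=()
for subsystem in 1 2 3; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join with commas in a subshell so the caller's IFS is untouched;
# the real script does the same with 'IFS=, printf'.
joined=$(IFS=,; printf '%s' "${config[*]}")
```

The expanded result is exactly what the `printf '%s\n'` at `common.sh@558` below emits: ten `bdev_nvme_attach_controller` parameter objects, one per `nqn.2016-06.io.spdk:cnodeN`.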
00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:53.029 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.030 { 00:20:53.030 "params": { 00:20:53.030 "name": "Nvme$subsystem", 00:20:53.030 "trtype": "$TEST_TRANSPORT", 00:20:53.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.030 "adrfam": "ipv4", 00:20:53.030 "trsvcid": "$NVMF_PORT", 00:20:53.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.030 "hdgst": ${hdgst:-false}, 00:20:53.030 "ddgst": ${ddgst:-false} 00:20:53.030 }, 00:20:53.030 "method": "bdev_nvme_attach_controller" 00:20:53.030 } 00:20:53.030 EOF 00:20:53.030 )") 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.030 { 00:20:53.030 "params": { 00:20:53.030 "name": "Nvme$subsystem", 00:20:53.030 "trtype": "$TEST_TRANSPORT", 00:20:53.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.030 "adrfam": "ipv4", 00:20:53.030 "trsvcid": "$NVMF_PORT", 00:20:53.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.030 "hdgst": ${hdgst:-false}, 00:20:53.030 "ddgst": ${ddgst:-false} 00:20:53.030 
}, 00:20:53.030 "method": "bdev_nvme_attach_controller" 00:20:53.030 } 00:20:53.030 EOF 00:20:53.030 )") 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.030 { 00:20:53.030 "params": { 00:20:53.030 "name": "Nvme$subsystem", 00:20:53.030 "trtype": "$TEST_TRANSPORT", 00:20:53.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.030 "adrfam": "ipv4", 00:20:53.030 "trsvcid": "$NVMF_PORT", 00:20:53.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.030 "hdgst": ${hdgst:-false}, 00:20:53.030 "ddgst": ${ddgst:-false} 00:20:53.030 }, 00:20:53.030 "method": "bdev_nvme_attach_controller" 00:20:53.030 } 00:20:53.030 EOF 00:20:53.030 )") 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.030 { 00:20:53.030 "params": { 00:20:53.030 "name": "Nvme$subsystem", 00:20:53.030 "trtype": "$TEST_TRANSPORT", 00:20:53.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.030 "adrfam": "ipv4", 00:20:53.030 "trsvcid": "$NVMF_PORT", 00:20:53.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.030 "hdgst": ${hdgst:-false}, 00:20:53.030 "ddgst": ${ddgst:-false} 00:20:53.030 }, 00:20:53.030 "method": "bdev_nvme_attach_controller" 00:20:53.030 } 00:20:53.030 EOF 00:20:53.030 )") 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.030 17:02:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.030 { 00:20:53.030 "params": { 00:20:53.030 "name": "Nvme$subsystem", 00:20:53.030 "trtype": "$TEST_TRANSPORT", 00:20:53.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.030 "adrfam": "ipv4", 00:20:53.030 "trsvcid": "$NVMF_PORT", 00:20:53.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.030 "hdgst": ${hdgst:-false}, 00:20:53.030 "ddgst": ${ddgst:-false} 00:20:53.030 }, 00:20:53.030 "method": "bdev_nvme_attach_controller" 00:20:53.030 } 00:20:53.030 EOF 00:20:53.030 )") 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.030 { 00:20:53.030 "params": { 00:20:53.030 "name": "Nvme$subsystem", 00:20:53.030 "trtype": "$TEST_TRANSPORT", 00:20:53.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.030 "adrfam": "ipv4", 00:20:53.030 "trsvcid": "$NVMF_PORT", 00:20:53.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.030 "hdgst": ${hdgst:-false}, 00:20:53.030 "ddgst": ${ddgst:-false} 00:20:53.030 }, 00:20:53.030 "method": "bdev_nvme_attach_controller" 00:20:53.030 } 00:20:53.030 EOF 00:20:53.030 )") 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.030 { 00:20:53.030 
"params": { 00:20:53.030 "name": "Nvme$subsystem", 00:20:53.030 "trtype": "$TEST_TRANSPORT", 00:20:53.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.030 "adrfam": "ipv4", 00:20:53.030 "trsvcid": "$NVMF_PORT", 00:20:53.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.030 "hdgst": ${hdgst:-false}, 00:20:53.030 "ddgst": ${ddgst:-false} 00:20:53.030 }, 00:20:53.030 "method": "bdev_nvme_attach_controller" 00:20:53.030 } 00:20:53.030 EOF 00:20:53.030 )") 00:20:53.030 [2024-07-15 17:02:59.689009] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:20:53.030 [2024-07-15 17:02:59.689059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142735 ] 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.030 { 00:20:53.030 "params": { 00:20:53.030 "name": "Nvme$subsystem", 00:20:53.030 "trtype": "$TEST_TRANSPORT", 00:20:53.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.030 "adrfam": "ipv4", 00:20:53.030 "trsvcid": "$NVMF_PORT", 00:20:53.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.030 "hdgst": ${hdgst:-false}, 00:20:53.030 "ddgst": ${ddgst:-false} 00:20:53.030 }, 00:20:53.030 "method": "bdev_nvme_attach_controller" 00:20:53.030 } 00:20:53.030 EOF 00:20:53.030 )") 00:20:53.030 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.289 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.289 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.289 { 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme$subsystem", 00:20:53.289 "trtype": "$TEST_TRANSPORT", 00:20:53.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "$NVMF_PORT", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.289 "hdgst": ${hdgst:-false}, 00:20:53.289 "ddgst": ${ddgst:-false} 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 } 00:20:53.289 EOF 00:20:53.289 )") 00:20:53.289 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.289 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.289 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.289 { 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme$subsystem", 00:20:53.289 "trtype": "$TEST_TRANSPORT", 00:20:53.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "$NVMF_PORT", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.289 "hdgst": ${hdgst:-false}, 00:20:53.289 "ddgst": ${ddgst:-false} 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 } 00:20:53.289 EOF 00:20:53.289 )") 00:20:53.289 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.289 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.289 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:53.289 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:53.289 17:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme1", 00:20:53.289 "trtype": "tcp", 00:20:53.289 "traddr": "10.0.0.2", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "4420", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.289 "hdgst": false, 00:20:53.289 "ddgst": false 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 },{ 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme2", 00:20:53.289 "trtype": "tcp", 00:20:53.289 "traddr": "10.0.0.2", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "4420", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:53.289 "hdgst": false, 00:20:53.289 "ddgst": false 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 },{ 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme3", 00:20:53.289 "trtype": "tcp", 00:20:53.289 "traddr": "10.0.0.2", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "4420", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:53.289 "hdgst": false, 00:20:53.289 "ddgst": false 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 },{ 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme4", 00:20:53.289 "trtype": "tcp", 00:20:53.289 "traddr": "10.0.0.2", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "4420", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:53.289 "hdgst": false, 00:20:53.289 "ddgst": false 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 },{ 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme5", 00:20:53.289 
"trtype": "tcp", 00:20:53.289 "traddr": "10.0.0.2", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "4420", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:53.289 "hdgst": false, 00:20:53.289 "ddgst": false 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 },{ 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme6", 00:20:53.289 "trtype": "tcp", 00:20:53.289 "traddr": "10.0.0.2", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "4420", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:53.289 "hdgst": false, 00:20:53.289 "ddgst": false 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 },{ 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme7", 00:20:53.289 "trtype": "tcp", 00:20:53.289 "traddr": "10.0.0.2", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "4420", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:53.289 "hdgst": false, 00:20:53.289 "ddgst": false 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 },{ 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme8", 00:20:53.289 "trtype": "tcp", 00:20:53.289 "traddr": "10.0.0.2", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "4420", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:53.289 "hdgst": false, 00:20:53.289 "ddgst": false 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 },{ 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme9", 00:20:53.289 "trtype": "tcp", 00:20:53.289 "traddr": "10.0.0.2", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "4420", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:53.289 "hdgst": false, 00:20:53.289 "ddgst": 
false 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 },{ 00:20:53.289 "params": { 00:20:53.289 "name": "Nvme10", 00:20:53.289 "trtype": "tcp", 00:20:53.289 "traddr": "10.0.0.2", 00:20:53.289 "adrfam": "ipv4", 00:20:53.289 "trsvcid": "4420", 00:20:53.289 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:53.289 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:53.289 "hdgst": false, 00:20:53.289 "ddgst": false 00:20:53.289 }, 00:20:53.289 "method": "bdev_nvme_attach_controller" 00:20:53.289 }' 00:20:53.289 [2024-07-15 17:02:59.744604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.289 [2024-07-15 17:02:59.818623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.713 Running I/O for 10 seconds... 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:54.713 17:03:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:54.713 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:54.971 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:54.971 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:54.971 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:54.971 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:54.971 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.971 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.971 17:03:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.230 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=72 00:20:55.230 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 72 -ge 100 ']' 00:20:55.230 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=199 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 199 -ge 100 ']' 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 142735 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 142735 ']' 
00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 142735 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:55.490 17:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 142735 00:20:55.490 17:03:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:55.490 17:03:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:55.490 17:03:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 142735' 00:20:55.490 killing process with pid 142735 00:20:55.490 17:03:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 142735 00:20:55.490 17:03:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 142735 00:20:55.490 Received shutdown signal, test time was about 0.908510 seconds 00:20:55.490 00:20:55.490 Latency(us) 00:20:55.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.490 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.490 Verification LBA range: start 0x0 length 0x400 00:20:55.490 Nvme1n1 : 0.90 290.46 18.15 0.00 0.00 216983.78 5698.78 219745.06 00:20:55.490 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.490 Verification LBA range: start 0x0 length 0x400 00:20:55.490 Nvme2n1 : 0.89 312.11 19.51 0.00 0.00 196524.21 5299.87 216097.84 00:20:55.490 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.490 Verification LBA range: start 0x0 length 0x400 00:20:55.490 Nvme3n1 : 0.89 288.98 18.06 0.00 0.00 211051.07 
15272.74 215186.03 00:20:55.490 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.490 Verification LBA range: start 0x0 length 0x400 00:20:55.490 Nvme4n1 : 0.89 286.76 17.92 0.00 0.00 208753.20 14531.90 228863.11 00:20:55.490 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.490 Verification LBA range: start 0x0 length 0x400 00:20:55.490 Nvme5n1 : 0.91 282.20 17.64 0.00 0.00 207865.54 16754.42 221568.67 00:20:55.490 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.490 Verification LBA range: start 0x0 length 0x400 00:20:55.490 Nvme6n1 : 0.90 284.53 17.78 0.00 0.00 202679.43 16298.52 216097.84 00:20:55.490 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.490 Verification LBA range: start 0x0 length 0x400 00:20:55.490 Nvme7n1 : 0.91 281.99 17.62 0.00 0.00 200633.88 18236.10 220656.86 00:20:55.490 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.490 Verification LBA range: start 0x0 length 0x400 00:20:55.490 Nvme8n1 : 0.90 283.78 17.74 0.00 0.00 195436.86 15044.79 197861.73 00:20:55.490 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.490 Verification LBA range: start 0x0 length 0x400 00:20:55.490 Nvme9n1 : 0.88 217.92 13.62 0.00 0.00 246893.23 19489.84 220656.86 00:20:55.490 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:55.490 Verification LBA range: start 0x0 length 0x400 00:20:55.490 Nvme10n1 : 0.88 226.41 14.15 0.00 0.00 231974.17 4758.48 238892.97 00:20:55.490 =================================================================================================================== 00:20:55.490 Total : 2755.14 172.20 0.00 0.00 210377.03 4758.48 238892.97 00:20:55.749 17:03:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@114 -- # kill -0 142458 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.686 rmmod nvme_tcp 00:20:56.686 rmmod nvme_fabrics 00:20:56.686 rmmod nvme_keyring 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 142458 ']' 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 142458 00:20:56.686 
17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 142458 ']' 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 142458 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.686 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 142458 00:20:56.945 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:56.945 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:56.945 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 142458' 00:20:56.945 killing process with pid 142458 00:20:56.945 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 142458 00:20:56.945 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 142458 00:20:57.203 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.203 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.203 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.203 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.203 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.203 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.203 17:03:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.203 17:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.736 00:20:59.736 real 0m7.881s 00:20:59.736 user 0m23.847s 00:20:59.736 sys 0m1.345s 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.736 ************************************ 00:20:59.736 END TEST nvmf_shutdown_tc2 00:20:59.736 ************************************ 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:59.736 ************************************ 00:20:59.736 START TEST nvmf_shutdown_tc3 00:20:59.736 ************************************ 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@295 -- # net_devs=() 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:59.736 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.737 17:03:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:59.737 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:20:59.737 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:59.737 Found net devices under 0000:86:00.0: cvl_0_0 
00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:59.737 Found net devices under 0000:86:00.1: cvl_0_1 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.737 17:03:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:59.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:20:59.737 00:20:59.737 --- 10.0.0.2 ping statistics --- 00:20:59.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.737 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:20:59.737 00:20:59.737 --- 10.0.0.1 ping statistics --- 00:20:59.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.737 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=144117 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 144117 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 144117 ']' 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.737 17:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.737 [2024-07-15 17:03:06.269806] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:20:59.737 [2024-07-15 17:03:06.269849] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.738 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.738 [2024-07-15 17:03:06.327874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.738 [2024-07-15 17:03:06.399846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.738 [2024-07-15 17:03:06.399889] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.738 [2024-07-15 17:03:06.399896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.738 [2024-07-15 17:03:06.399902] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.738 [2024-07-15 17:03:06.399907] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:59.738 [2024-07-15 17:03:06.400007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.738 [2024-07-15 17:03:06.400075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.738 [2024-07-15 17:03:06.400162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.738 [2024-07-15 17:03:06.400163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.676 [2024-07-15 17:03:07.114165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:00.676 
17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:00.676 17:03:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.676 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.676 Malloc1 00:21:00.676 [2024-07-15 17:03:07.205913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.676 Malloc2 00:21:00.676 Malloc3 00:21:00.676 Malloc4 00:21:00.935 Malloc5 00:21:00.935 Malloc6 00:21:00.935 Malloc7 00:21:00.935 Malloc8 00:21:00.935 Malloc9 00:21:00.935 Malloc10 00:21:00.935 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.935 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:00.935 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:00.935 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=144404 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 144404 
/var/tmp/bdevperf.sock 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 144404 ']' 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.195 { 00:21:01.195 "params": { 00:21:01.195 "name": "Nvme$subsystem", 00:21:01.195 "trtype": "$TEST_TRANSPORT", 00:21:01.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.195 "adrfam": "ipv4", 00:21:01.195 "trsvcid": "$NVMF_PORT", 00:21:01.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.195 "hdgst": ${hdgst:-false}, 00:21:01.195 "ddgst": ${ddgst:-false} 00:21:01.195 }, 00:21:01.195 "method": "bdev_nvme_attach_controller" 00:21:01.195 } 00:21:01.195 EOF 00:21:01.195 )") 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.195 { 00:21:01.195 "params": { 00:21:01.195 "name": "Nvme$subsystem", 00:21:01.195 "trtype": "$TEST_TRANSPORT", 00:21:01.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.195 "adrfam": "ipv4", 00:21:01.195 "trsvcid": "$NVMF_PORT", 00:21:01.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.195 "hdgst": ${hdgst:-false}, 00:21:01.195 "ddgst": ${ddgst:-false} 00:21:01.195 
}, 00:21:01.195 "method": "bdev_nvme_attach_controller" 00:21:01.195 } 00:21:01.195 EOF 00:21:01.195 )") 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.195 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.195 { 00:21:01.195 "params": { 00:21:01.195 "name": "Nvme$subsystem", 00:21:01.195 "trtype": "$TEST_TRANSPORT", 00:21:01.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.195 "adrfam": "ipv4", 00:21:01.195 "trsvcid": "$NVMF_PORT", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.196 "hdgst": ${hdgst:-false}, 00:21:01.196 "ddgst": ${ddgst:-false} 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 } 00:21:01.196 EOF 00:21:01.196 )") 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.196 { 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme$subsystem", 00:21:01.196 "trtype": "$TEST_TRANSPORT", 00:21:01.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "$NVMF_PORT", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.196 "hdgst": ${hdgst:-false}, 00:21:01.196 "ddgst": ${ddgst:-false} 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 } 00:21:01.196 EOF 00:21:01.196 )") 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.196 17:03:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.196 { 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme$subsystem", 00:21:01.196 "trtype": "$TEST_TRANSPORT", 00:21:01.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "$NVMF_PORT", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.196 "hdgst": ${hdgst:-false}, 00:21:01.196 "ddgst": ${ddgst:-false} 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 } 00:21:01.196 EOF 00:21:01.196 )") 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.196 { 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme$subsystem", 00:21:01.196 "trtype": "$TEST_TRANSPORT", 00:21:01.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "$NVMF_PORT", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.196 "hdgst": ${hdgst:-false}, 00:21:01.196 "ddgst": ${ddgst:-false} 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 } 00:21:01.196 EOF 00:21:01.196 )") 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.196 { 00:21:01.196 
"params": { 00:21:01.196 "name": "Nvme$subsystem", 00:21:01.196 "trtype": "$TEST_TRANSPORT", 00:21:01.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "$NVMF_PORT", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.196 "hdgst": ${hdgst:-false}, 00:21:01.196 "ddgst": ${ddgst:-false} 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 } 00:21:01.196 EOF 00:21:01.196 )") 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.196 [2024-07-15 17:03:07.681232] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:21:01.196 [2024-07-15 17:03:07.681283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144404 ] 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.196 { 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme$subsystem", 00:21:01.196 "trtype": "$TEST_TRANSPORT", 00:21:01.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "$NVMF_PORT", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.196 "hdgst": ${hdgst:-false}, 00:21:01.196 "ddgst": ${ddgst:-false} 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 } 00:21:01.196 EOF 00:21:01.196 )") 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.196 { 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme$subsystem", 00:21:01.196 "trtype": "$TEST_TRANSPORT", 00:21:01.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "$NVMF_PORT", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.196 "hdgst": ${hdgst:-false}, 00:21:01.196 "ddgst": ${ddgst:-false} 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 } 00:21:01.196 EOF 00:21:01.196 )") 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.196 { 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme$subsystem", 00:21:01.196 "trtype": "$TEST_TRANSPORT", 00:21:01.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "$NVMF_PORT", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.196 "hdgst": ${hdgst:-false}, 00:21:01.196 "ddgst": ${ddgst:-false} 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 } 00:21:01.196 EOF 00:21:01.196 )") 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:21:01.196 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:01.196 17:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme1", 00:21:01.196 "trtype": "tcp", 00:21:01.196 "traddr": "10.0.0.2", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "4420", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.196 "hdgst": false, 00:21:01.196 "ddgst": false 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 },{ 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme2", 00:21:01.196 "trtype": "tcp", 00:21:01.196 "traddr": "10.0.0.2", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "4420", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:01.196 "hdgst": false, 00:21:01.196 "ddgst": false 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 },{ 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme3", 00:21:01.196 "trtype": "tcp", 00:21:01.196 "traddr": "10.0.0.2", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "4420", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:01.196 "hdgst": false, 00:21:01.196 "ddgst": false 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 },{ 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme4", 00:21:01.196 "trtype": "tcp", 00:21:01.196 "traddr": "10.0.0.2", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "4420", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:01.196 "hdgst": false, 00:21:01.196 "ddgst": false 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 },{ 
00:21:01.196 "params": { 00:21:01.196 "name": "Nvme5", 00:21:01.196 "trtype": "tcp", 00:21:01.196 "traddr": "10.0.0.2", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "4420", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:01.196 "hdgst": false, 00:21:01.196 "ddgst": false 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 },{ 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme6", 00:21:01.196 "trtype": "tcp", 00:21:01.196 "traddr": "10.0.0.2", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "4420", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:01.196 "hdgst": false, 00:21:01.196 "ddgst": false 00:21:01.196 }, 00:21:01.196 "method": "bdev_nvme_attach_controller" 00:21:01.196 },{ 00:21:01.196 "params": { 00:21:01.196 "name": "Nvme7", 00:21:01.196 "trtype": "tcp", 00:21:01.196 "traddr": "10.0.0.2", 00:21:01.196 "adrfam": "ipv4", 00:21:01.196 "trsvcid": "4420", 00:21:01.196 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:01.196 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:01.196 "hdgst": false, 00:21:01.197 "ddgst": false 00:21:01.197 }, 00:21:01.197 "method": "bdev_nvme_attach_controller" 00:21:01.197 },{ 00:21:01.197 "params": { 00:21:01.197 "name": "Nvme8", 00:21:01.197 "trtype": "tcp", 00:21:01.197 "traddr": "10.0.0.2", 00:21:01.197 "adrfam": "ipv4", 00:21:01.197 "trsvcid": "4420", 00:21:01.197 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:01.197 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:01.197 "hdgst": false, 00:21:01.197 "ddgst": false 00:21:01.197 }, 00:21:01.197 "method": "bdev_nvme_attach_controller" 00:21:01.197 },{ 00:21:01.197 "params": { 00:21:01.197 "name": "Nvme9", 00:21:01.197 "trtype": "tcp", 00:21:01.197 "traddr": "10.0.0.2", 00:21:01.197 "adrfam": "ipv4", 00:21:01.197 "trsvcid": "4420", 00:21:01.197 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:01.197 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:01.197 "hdgst": false, 00:21:01.197 "ddgst": false 00:21:01.197 }, 00:21:01.197 "method": "bdev_nvme_attach_controller" 00:21:01.197 },{ 00:21:01.197 "params": { 00:21:01.197 "name": "Nvme10", 00:21:01.197 "trtype": "tcp", 00:21:01.197 "traddr": "10.0.0.2", 00:21:01.197 "adrfam": "ipv4", 00:21:01.197 "trsvcid": "4420", 00:21:01.197 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:01.197 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:01.197 "hdgst": false, 00:21:01.197 "ddgst": false 00:21:01.197 }, 00:21:01.197 "method": "bdev_nvme_attach_controller" 00:21:01.197 }' 00:21:01.197 [2024-07-15 17:03:07.736374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.197 [2024-07-15 17:03:07.811924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.576 Running I/O for 10 seconds... 00:21:02.576 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.576 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:02.576 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:02.576 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.576 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # 
'[' -z /var/tmp/bdevperf.sock ']' 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:02.835 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:03.096 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:03.096 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:03.096 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.096 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 
00:21:03.096 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.096 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.096 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.096 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:03.096 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:03.096 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- 
# return 0 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 144117 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 144117 ']' 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 144117 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.356 17:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144117 00:21:03.631 17:03:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:03.631 17:03:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:03.631 17:03:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144117' 00:21:03.631 killing process with pid 144117 00:21:03.631 17:03:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 144117 00:21:03.631 17:03:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 144117 00:21:03.631 [2024-07-15 17:03:10.030543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0430 is same with the state(5) to be set 00:21:03.631 [2024-07-15 17:03:10.030610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0430 is same with the state(5) to be set 00:21:03.631 [2024-07-15 17:03:10.030619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0430 is same with the state(5) to be set 00:21:03.631 [2024-07-15 17:03:10.030628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13b0430 is same with the state(5) to be set 00:21:03.631 [2024-07-15 17:03:10.030637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0430 is same with the state(5) to be set [message repeated for tqpair=0x13b0430 through 17:03:10.031083]
00:21:03.632 [2024-07-15 17:03:10.032100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593a00 is same with the state(5) to be set [message repeated for tqpair=0x1593a00 through 17:03:10.032185]
00:21:03.632 [2024-07-15 17:03:10.033137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b08d0 is same with the state(5) to be set [message repeated for tqpair=0x13b08d0 through 17:03:10.033598]
00:21:03.632 [2024-07-15 17:03:10.034750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0d70 is same with the state(5) to be set [message repeated for tqpair=0x13b0d70 through 17:03:10.035191]
00:21:03.633 [2024-07-15 17:03:10.036251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set [message repeated for tqpair=0x13b1230 through 17:03:10.036607; the repeats were interleaved mid-line with the nvme_qpair/nvme_tcp entries below, untangled here]
00:21:03.633 [2024-07-15 17:03:10.036453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:03.633 [2024-07-15 17:03:10.036485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.633 [2024-07-15 17:03:10.036499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:03.633 [2024-07-15 17:03:10.036508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.633 [2024-07-15 17:03:10.036518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:03.634 [2024-07-15 17:03:10.036526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.634 [2024-07-15 17:03:10.036534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:03.634 [2024-07-15 17:03:10.036543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.634 [2024-07-15 17:03:10.036554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe878b0 is same with the state(5) to be set
00:21:03.634 [2024-07-15 17:03:10.036596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:03.634 [2024-07-15 17:03:10.036606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.634 [2024-07-15 17:03:10.036616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with [2024-07-15 17:03:10.036616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:21:03.634 id:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f1d0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-15 17:03:10.036711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with [2024-07-15 17:03:10.036719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:21:03.634 
id:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1230 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e8d0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7050 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.036886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.634 [2024-07-15 17:03:10.036942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.634 [2024-07-15 17:03:10.036949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd2c70 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the 
state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 [2024-07-15 17:03:10.037500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.634 
[2024-07-15 17:03:10.037506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037593] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037669] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.037710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16f0 is same with the state(5) to be set 00:21:03.635 [2024-07-15 17:03:10.038907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.635 [2024-07-15 17:03:10.038933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.635 [2024-07-15 17:03:10.038949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.635 [2024-07-15 17:03:10.038956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.635 
00:21:03.635 [2024-07-15 17:03:10.038974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.038981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.038990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.038997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.635 [2024-07-15 17:03:10.039218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.635 [2024-07-15 17:03:10.039234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.636 [2024-07-15 17:03:10.039670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.636 [2024-07-15 17:03:10.039677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 17:03:10.039685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.636 [2024-07-15 17:03:10.039884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.636 [2024-07-15 17:03:10.039890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.637 [2024-07-15 17:03:10.039899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.637 [2024-07-15 17:03:10.039906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.637 [2024-07-15 17:03:10.039916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.637 [2024-07-15 17:03:10.039923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.637 [2024-07-15 17:03:10.039931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.637 [2024-07-15 17:03:10.039938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:03.637 [2024-07-15 17:03:10.040355] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdfcef0 was disconnected and freed. reset controller. 00:21:03.637 [2024-07-15 17:03:10.043003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 
[2024-07-15 17:03:10.043095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043169] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043254] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043332] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.043337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1b90 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044129] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044202] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.637 [2024-07-15 17:03:10.044269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044285] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044362] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044437] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2030 is same with the state(5) to be set 00:21:03.638 [2024-07-15 17:03:10.044830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:03.638 [2024-07-15 17:03:10.044865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe878b0 (9): Bad file descriptor 00:21:03.638 [2024-07-15 17:03:10.045153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.638 [2024-07-15 17:03:10.045173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.638 [2024-07-15 17:03:10.045186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.638 [2024-07-15 17:03:10.045193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.638 [2024-07-15 17:03:10.045202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.638 [2024-07-15 
17:03:10.045210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.638 [2024-07-15 17:03:10.045218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.638 [2024-07-15 17:03:10.045230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.638 [2024-07-15 17:03:10.045240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.638 [2024-07-15 17:03:10.045246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.638 [2024-07-15 17:03:10.045255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.638 [2024-07-15 17:03:10.045262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.638 [2024-07-15 17:03:10.045270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.638 [2024-07-15 17:03:10.045277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.638 [2024-07-15 17:03:10.045285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.638 [2024-07-15 17:03:10.045292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.638 [2024-07-15 17:03:10.045300] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.638 [2024-07-15 17:03:10.045297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.638 [2024-07-15 17:03:10.045309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.638 [2024-07-15 17:03:10.045313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.638 [2024-07-15 17:03:10.045318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.638 [2024-07-15 17:03:10.045320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.638 [2024-07-15 17:03:10.045331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.638 [2024-07-15 17:03:10.045331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.638 [2024-07-15 17:03:10.045341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.638 [2024-07-15 17:03:10.045343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.638 [2024-07-15 17:03:10.045348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.638 [2024-07-15 17:03:10.045351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.638 [2024-07-15 17:03:10.045356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.638 [2024-07-15 17:03:10.045361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.638 [2024-07-15 17:03:10.045363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.638 [2024-07-15 17:03:10.045368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.638 [2024-07-15 17:03:10.045370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.638 [2024-07-15 17:03:10.045378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.638 [2024-07-15 17:03:10.045377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.638 [2024-07-15 17:03:10.045388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.638 [2024-07-15 17:03:10.045389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.638 [2024-07-15 17:03:10.045395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.639 [2024-07-15 17:03:10.045744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set
00:21:03.639 [2024-07-15 17:03:10.045753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.639 [2024-07-15 17:03:10.045762]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set 00:21:03.639 [2024-07-15 17:03:10.045765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set 00:21:03.640 [2024-07-15 17:03:10.045773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set 00:21:03.640 [2024-07-15 17:03:10.045782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set 00:21:03.640 [2024-07-15 17:03:10.045790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b24d0 is same with the state(5) to be set 00:21:03.640 [2024-07-15 17:03:10.045800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045910] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.045986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.045993] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 
17:03:10.046170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.046217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.046382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2970 is same with the state(5) to be set 00:21:03.640 [2024-07-15 17:03:10.046409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2970 is same with the state(5) to be set 00:21:03.640 [2024-07-15 17:03:10.046415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2970 is same with the state(5) to be set 00:21:03.640 [2024-07-15 17:03:10.046422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2970 is same with the state(5) to be set 00:21:03.640 [2024-07-15 17:03:10.058941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.058979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.058989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.059000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.059008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.059111] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe14910 was disconnected and freed. reset controller. 00:21:03.640 [2024-07-15 17:03:10.059575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.059599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.059616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.059625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.059635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.059650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 
17:03:10.059659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.059668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.059678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.059686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.059696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.059704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.640 [2024-07-15 17:03:10.059714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.640 [2024-07-15 17:03:10.059722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.059982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.059992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 
17:03:10.060074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060176] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 
[2024-07-15 17:03:10.060415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.641 [2024-07-15 17:03:10.060515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.641 [2024-07-15 17:03:10.060523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.642 [2024-07-15 17:03:10.060798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:03.642 [2024-07-15 17:03:10.060883] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdfba60 was disconnected and freed. reset controller. 
00:21:03.642 [2024-07-15 17:03:10.060968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.060981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.060990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.060997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821340 is same with the state(5) to be set 00:21:03.642 [2024-07-15 17:03:10.061071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19bf0 is same with the state(5) to be set 00:21:03.642 [2024-07-15 17:03:10.061176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 
[2024-07-15 17:03:10.061222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e0d0 is same with the state(5) to be set 00:21:03.642 [2024-07-15 17:03:10.061284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5190 is same with the state(5) to be set 00:21:03.642 [2024-07-15 17:03:10.061377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0f1d0 (9): Bad file descriptor 00:21:03.642 [2024-07-15 17:03:10.061406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.642 [2024-07-15 17:03:10.061469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.642 [2024-07-15 17:03:10.061478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16b30 is same with the state(5) to be set 00:21:03.643 [2024-07-15 17:03:10.061490] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e8d0 (9): Bad file descriptor 00:21:03.643 [2024-07-15 17:03:10.061508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7050 (9): Bad file descriptor 00:21:03.643 [2024-07-15 17:03:10.061527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd2c70 (9): Bad file descriptor 00:21:03.643 [2024-07-15 17:03:10.061608] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:03.643 [2024-07-15 17:03:10.061671] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:03.643 [2024-07-15 17:03:10.061726] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:03.643 [2024-07-15 17:03:10.061784] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:03.643 [2024-07-15 17:03:10.064600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:03.643 [2024-07-15 17:03:10.064634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd16b30 (9): Bad file descriptor 00:21:03.643 [2024-07-15 17:03:10.064885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.643 [2024-07-15 17:03:10.064903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe878b0 with addr=10.0.0.2, port=4420 00:21:03.643 [2024-07-15 17:03:10.064915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe878b0 is same with the state(5) to be set 00:21:03.643 [2024-07-15 17:03:10.065016] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:03.643 [2024-07-15 17:03:10.065179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:03.643 [2024-07-15 17:03:10.065201] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e0d0 (9): Bad file descriptor 00:21:03.643 [2024-07-15 17:03:10.065236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe878b0 (9): Bad file descriptor 00:21:03.643 [2024-07-15 17:03:10.065301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:03.643 [2024-07-15 17:03:10.065549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.065982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.065995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.066006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.066018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.066028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.066040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.643 [2024-07-15 17:03:10.066050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.643 [2024-07-15 17:03:10.066061] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066185] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 
17:03:10.066450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.644 [2024-07-15 17:03:10.066754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.644 [2024-07-15 17:03:10.066765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcce040 is same with the state(5) to be set 00:21:03.644 [2024-07-15 17:03:10.066842] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcce040 was disconnected and freed. reset controller. 
00:21:03.644 [2024-07-15 17:03:10.067768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.644 [2024-07-15 17:03:10.067790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd16b30 with addr=10.0.0.2, port=4420
00:21:03.644 [2024-07-15 17:03:10.067801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16b30 is same with the state(5) to be set
00:21:03.644 [2024-07-15 17:03:10.067826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:03.644 [2024-07-15 17:03:10.067836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:03.644 [2024-07-15 17:03:10.067847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:03.644 [2024-07-15 17:03:10.069241] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:03.644 [2024-07-15 17:03:10.069305] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:03.644 [2024-07-15 17:03:10.069332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.644 [2024-07-15 17:03:10.069345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:03.644 [2024-07-15 17:03:10.069361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x821340 (9): Bad file descriptor
00:21:03.644 [2024-07-15 17:03:10.069553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.644 [2024-07-15 17:03:10.069572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e0d0 with addr=10.0.0.2, port=4420
00:21:03.644 [2024-07-15 17:03:10.069583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e0d0 is same with the state(5) to be set
00:21:03.644 [2024-07-15 17:03:10.069596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd16b30 (9): Bad file descriptor
00:21:03.644 [2024-07-15 17:03:10.069708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e0d0 (9): Bad file descriptor
00:21:03.644 [2024-07-15 17:03:10.069729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:03.644 [2024-07-15 17:03:10.069739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:03.644 [2024-07-15 17:03:10.069749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:03.644 [2024-07-15 17:03:10.070096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.644 [2024-07-15 17:03:10.070218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.644 [2024-07-15 17:03:10.070242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x821340 with addr=10.0.0.2, port=4420
00:21:03.644 [2024-07-15 17:03:10.070254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821340 is same with the state(5) to be set
00:21:03.644 [2024-07-15 17:03:10.070266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:03.644 [2024-07-15 17:03:10.070276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:03.644 [2024-07-15 17:03:10.070286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:03.645 [2024-07-15 17:03:10.070340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.645 [2024-07-15 17:03:10.070353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x821340 (9): Bad file descriptor
00:21:03.645 [2024-07-15 17:03:10.070402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:03.645 [2024-07-15 17:03:10.070414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:03.645 [2024-07-15 17:03:10.070424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:03.645 [2024-07-15 17:03:10.070468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.645 [2024-07-15 17:03:10.070953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd19bf0 (9): Bad file descriptor 00:21:03.645 [2024-07-15 17:03:10.070978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5190 (9): Bad file descriptor 00:21:03.645 [2024-07-15 17:03:10.071120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071241] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 
17:03:10.071514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 
nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:03.645 [2024-07-15 17:03:10.071911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.071981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.071995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.072008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.072019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.645 [2024-07-15 17:03:10.072031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.645 [2024-07-15 17:03:10.072042] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 
17:03:10.072439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072563] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.072620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.072631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13ea0 is same with the state(5) to be set 00:21:03.646 [2024-07-15 17:03:10.073910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.073924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.073936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.073944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.073954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.073961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.073970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.073980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.073990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.073997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.074013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.074029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.074045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074054] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.074061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.074077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.074093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.074110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.074126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.074142] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.074158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.646 [2024-07-15 17:03:10.074174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.646 [2024-07-15 17:03:10.074185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 
17:03:10.074336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:03.647 [2024-07-15 17:03:10.074617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074704] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.647 [2024-07-15 17:03:10.074767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.647 [2024-07-15 17:03:10.074775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.074963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.074972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9a490 is same with the state(5) to be set 00:21:03.648 [2024-07-15 17:03:10.075973] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.075986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.075997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076282] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076372] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.648 [2024-07-15 17:03:10.076475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.648 [2024-07-15 17:03:10.076484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 
17:03:10.076566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:03.649 [2024-07-15 17:03:10.076839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076929] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.076986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.076993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.077001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.077009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.077018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.077026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.077034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.077043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.077054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9b920 is same with the state(5) to be set 00:21:03.649 [2024-07-15 17:03:10.078061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.078076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.078087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.078095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.078105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.078112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.078122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.078129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.078138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.078146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.078155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.078164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.649 [2024-07-15 17:03:10.078172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.649 [2024-07-15 17:03:10.078180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078221] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078321] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 
17:03:10.078523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:03.650 [2024-07-15 17:03:10.078804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.650 [2024-07-15 17:03:10.078891] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.650 [2024-07-15 17:03:10.078900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.651 [2024-07-15 17:03:10.078908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.651 [2024-07-15 17:03:10.078917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.651 [2024-07-15 17:03:10.078925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.651 [2024-07-15 17:03:10.078934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.651 [2024-07-15 17:03:10.078941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.651 [2024-07-15 17:03:10.078949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.651 [2024-07-15 17:03:10.078957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.651 [2024-07-15 17:03:10.078966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.651 [2024-07-15 17:03:10.078975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.651 [2024-07-15 17:03:10.078984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.651 [2024-07-15 17:03:10.078992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.651 [2024-07-15 17:03:10.079001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.651 [2024-07-15 17:03:10.079008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.651 [2024-07-15 17:03:10.079017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.651 [2024-07-15 17:03:10.079026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.651 [2024-07-15 17:03:10.079035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.651 [2024-07-15 17:03:10.079041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.651 [2024-07-15 17:03:10.079050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.651 [2024-07-15 17:03:10.079057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.651 [2024-07-15 17:03:10.079067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.651 [2024-07-15 17:03:10.079073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:21:03.651 [2024-07-15 17:03:10.079083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.651 [2024-07-15 17:03:10.079090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.651 [2024-07-15 17:03:10.079099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.651 [2024-07-15 17:03:10.079106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.651 [2024-07-15 17:03:10.079115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.651 [2024-07-15 17:03:10.079124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.651 [2024-07-15 17:03:10.079133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.651 [2024-07-15 17:03:10.079141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.651 [2024-07-15 17:03:10.079149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcccb70 is same with the state(5) to be set
00:21:03.652 [2024-07-15 17:03:10.080135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.652 [2024-07-15 17:03:10.080152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:03.652 [2024-07-15 17:03:10.080161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:03.652 [2024-07-15 17:03:10.080264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:03.652 [2024-07-15 17:03:10.080277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:03.652 [2024-07-15 17:03:10.080493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.652 [2024-07-15 17:03:10.080509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd2c70 with addr=10.0.0.2, port=4420
00:21:03.652 [2024-07-15 17:03:10.080518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd2c70 is same with the state(5) to be set
00:21:03.652 [2024-07-15 17:03:10.080677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.652 [2024-07-15 17:03:10.080690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e8d0 with addr=10.0.0.2, port=4420
00:21:03.652 [2024-07-15 17:03:10.080698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e8d0 is same with the state(5) to be set
00:21:03.652 [2024-07-15 17:03:10.080810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.652 [2024-07-15 17:03:10.080822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7050 with addr=10.0.0.2, port=4420
00:21:03.652 [2024-07-15 17:03:10.080830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7050 is same with the state(5) to be set
00:21:03.652 [2024-07-15 17:03:10.081726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:03.652 [2024-07-15 17:03:10.081743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:03.652 [2024-07-15 17:03:10.081754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:03.652 [2024-07-15 17:03:10.081984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.652 [2024-07-15 17:03:10.081999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0f1d0 with addr=10.0.0.2, port=4420
00:21:03.652 [2024-07-15 17:03:10.082007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f1d0 is same with the state(5) to be set
00:21:03.652 [2024-07-15 17:03:10.082126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.652 [2024-07-15 17:03:10.082138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe878b0 with addr=10.0.0.2, port=4420
00:21:03.652 [2024-07-15 17:03:10.082146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe878b0 is same with the state(5) to be set
00:21:03.652 [2024-07-15 17:03:10.082157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd2c70 (9): Bad file descriptor
00:21:03.652 [2024-07-15 17:03:10.082168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e8d0 (9): Bad file descriptor
00:21:03.652 [2024-07-15 17:03:10.082182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7050 (9): Bad file descriptor
00:21:03.652 [2024-07-15 17:03:10.082375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.652 [2024-07-15 17:03:10.082390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd16b30 with addr=10.0.0.2, port=4420
00:21:03.652 [2024-07-15 17:03:10.082399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd16b30 is same with the state(5) to be set
00:21:03.652 [2024-07-15 17:03:10.082570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.652 [2024-07-15 17:03:10.082582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e0d0 with addr=10.0.0.2, port=4420
00:21:03.652 [2024-07-15 17:03:10.082591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e0d0 is same with the state(5) to be set
00:21:03.652 [2024-07-15 17:03:10.082661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.652 [2024-07-15 17:03:10.082672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x821340 with addr=10.0.0.2, port=4420
00:21:03.652 [2024-07-15 17:03:10.082680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821340 is same with the state(5) to be set
00:21:03.652 [2024-07-15 17:03:10.082688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0f1d0 (9): Bad file descriptor
00:21:03.652 [2024-07-15 17:03:10.082697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe878b0 (9): Bad file descriptor
00:21:03.652 [2024-07-15 17:03:10.082707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:03.652 [2024-07-15 17:03:10.082713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:03.653 [2024-07-15 17:03:10.082722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.653 [2024-07-15 17:03:10.082732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:21:03.653 [2024-07-15 17:03:10.082740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:21:03.653 [2024-07-15 17:03:10.082747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:03.653 [2024-07-15 17:03:10.082756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:03.653 [2024-07-15 17:03:10.082763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:03.653 [2024-07-15 17:03:10.082771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:03.653 [2024-07-15 17:03:10.082842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.653 [2024-07-15 17:03:10.082852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.653 [2024-07-15 17:03:10.082865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.653 [2024-07-15 17:03:10.082873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.653 [2024-07-15 17:03:10.082882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.653 [2024-07-15 17:03:10.082891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.653 [2024-07-15 17:03:10.082899]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.082911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.082920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.082927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.082936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.082945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.082954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.082961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.082970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.082977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.082987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.082994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083196] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083295] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 
17:03:10.083481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.653 [2024-07-15 17:03:10.083498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.653 [2024-07-15 17:03:10.083506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:03.654 [2024-07-15 17:03:10.083764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083852] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.083915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.083923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe15de0 is same with the state(5) to be set 00:21:03.654 [2024-07-15 17:03:10.084927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.084941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.084952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.084960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.084973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.084981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.084990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.084998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:03.654 [2024-07-15 17:03:10.085048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.654 [2024-07-15 17:03:10.085208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.654 [2024-07-15 17:03:10.085215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085526] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 
17:03:10.085715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.655 [2024-07-15 17:03:10.085917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.655 [2024-07-15 17:03:10.085925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.656 [2024-07-15 17:03:10.085933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.656 [2024-07-15 17:03:10.085941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.656 [2024-07-15 17:03:10.085948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.656 [2024-07-15 17:03:10.085957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.656 [2024-07-15 17:03:10.085964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.656 [2024-07-15 17:03:10.085973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.656 [2024-07-15 17:03:10.085979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:03.656 [2024-07-15 17:03:10.085988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.656 [2024-07-15 17:03:10.085995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.656 [2024-07-15 17:03:10.086005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe172b0 is same with the state(5) to be set
00:21:03.656 [2024-07-15 17:03:10.090120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.656 [2024-07-15 17:03:10.090143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.656 [2024-07-15 17:03:10.090150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:03.656 [2024-07-15 17:03:10.090158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:03.656 task offset: 25216 on job bdev=Nvme10n1 fails
00:21:03.656
00:21:03.656                                                  Latency(us)
00:21:03.656 Device Information : runtime(s)    IOPS   MiB/s  Fail/s   TO/s    Average        min        max
00:21:03.656 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.656 Job: Nvme1n1 ended in about 0.91 seconds with error
00:21:03.656 Verification LBA range: start 0x0 length 0x400
00:21:03.656 Nvme1n1   :  0.91  211.19  13.20  70.40  0.00  225026.00  16184.54  215186.03
00:21:03.656 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.656 Job: Nvme2n1 ended in about 0.91 seconds with error
00:21:03.656 Verification LBA range: start 0x0 length 0x400
00:21:03.656 Nvme2n1   :  0.91  210.67  13.17  70.22  0.00  221615.64  18122.13  217921.45
00:21:03.656 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.656 Job: Nvme3n1 ended in about 0.91 seconds with error
00:21:03.656 Verification LBA range: start 0x0 length 0x400
00:21:03.656 Nvme3n1   :  0.91  214.57  13.41  70.06  0.00  214822.41  5299.87  217921.45
00:21:03.656 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.656 Job: Nvme4n1 ended in about 0.92 seconds with error
00:21:03.656 Verification LBA range: start 0x0 length 0x400
00:21:03.656 Nvme4n1   :  0.92  209.71  13.11  69.90  0.00  214765.30  14360.93  229774.91
00:21:03.656 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.656 Job: Nvme5n1 ended in about 0.90 seconds with error
00:21:03.656 Verification LBA range: start 0x0 length 0x400
00:21:03.656 Nvme5n1   :  0.90  152.59   9.54  70.76  0.00  263689.72  17666.23  227951.30
00:21:03.656 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.656 Job: Nvme6n1 ended in about 0.90 seconds with error
00:21:03.656 Verification LBA range: start 0x0 length 0x400
00:21:03.656 Nvme6n1   :  0.90  213.73  13.36  71.24  0.00  202547.87  16982.37  219745.06
00:21:03.656 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.656 Job: Nvme7n1 ended in about 0.92 seconds with error
00:21:03.656 Verification LBA range: start 0x0 length 0x400
00:21:03.656 Nvme7n1   :  0.92  208.63  13.04  69.54  0.00  204125.05  15272.74  215186.03
00:21:03.656 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.656 Job: Nvme8n1 ended in about 0.92 seconds with error
00:21:03.656 Verification LBA range: start 0x0 length 0x400
00:21:03.656 Nvme8n1   :  0.92  208.16  13.01  69.39  0.00  200641.45  16412.49  214274.23
00:21:03.656 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.656 Job: Nvme9n1 ended in about 0.90 seconds with error
00:21:03.656 Verification LBA range: start 0x0 length 0x400
00:21:03.656 Nvme9n1   :  0.90  213.45  13.34  71.15  0.00  190948.79  3789.69  221568.67
00:21:03.656 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.656 Job: Nvme10n1 ended in about 0.88 seconds with error
00:21:03.656 Verification LBA range: start 0x0 length 0x400
00:21:03.656 Nvme10n1  :  0.88  218.09  13.63  72.70  0.00  182240.72  7693.36  237069.36
00:21:03.656 ===================================================================================================================
00:21:03.656 Total     :        2060.78  128.80  705.37  0.00  210935.59  3789.69  237069.36
00:21:03.656 [2024-07-15 17:03:10.114534] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:03.656 [2024-07-15 17:03:10.114574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:03.656 [2024-07-15 17:03:10.114636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd16b30 (9): Bad file descriptor
00:21:03.656 [2024-07-15 17:03:10.114650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e0d0 (9): Bad file descriptor
00:21:03.656 [2024-07-15 17:03:10.114660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x821340 (9): Bad file descriptor
00:21:03.656 [2024-07-15 17:03:10.114668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:21:03.656 [2024-07-15 17:03:10.114675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:21:03.656 [2024-07-15 17:03:10.114682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:21:03.656 [2024-07-15 17:03:10.114696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:03.656 [2024-07-15 17:03:10.114702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:03.656 [2024-07-15 17:03:10.114709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:03.656 [2024-07-15 17:03:10.115023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.656 [2024-07-15 17:03:10.115037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.656 [2024-07-15 17:03:10.115268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.656 [2024-07-15 17:03:10.115285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5190 with addr=10.0.0.2, port=4420 00:21:03.656 [2024-07-15 17:03:10.115295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5190 is same with the state(5) to be set 00:21:03.656 [2024-07-15 17:03:10.115427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.656 [2024-07-15 17:03:10.115439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd19bf0 with addr=10.0.0.2, port=4420 00:21:03.656 [2024-07-15 17:03:10.115447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19bf0 is same with the state(5) to be set 00:21:03.656 [2024-07-15 17:03:10.115455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:03.656 [2024-07-15 17:03:10.115461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:03.656 [2024-07-15 17:03:10.115469] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:03.656 [2024-07-15 17:03:10.115480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:03.656 [2024-07-15 17:03:10.115487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:03.656 [2024-07-15 17:03:10.115494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:03.656 [2024-07-15 17:03:10.115507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:03.656 [2024-07-15 17:03:10.115513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:03.656 [2024-07-15 17:03:10.115520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:03.656 [2024-07-15 17:03:10.115563] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:03.656 [2024-07-15 17:03:10.115574] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:03.656 [2024-07-15 17:03:10.115584] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:03.656 [2024-07-15 17:03:10.116072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.656 [2024-07-15 17:03:10.116088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.656 [2024-07-15 17:03:10.116095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.656 [2024-07-15 17:03:10.116117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5190 (9): Bad file descriptor 00:21:03.656 [2024-07-15 17:03:10.116128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd19bf0 (9): Bad file descriptor 00:21:03.656 [2024-07-15 17:03:10.116171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:03.656 [2024-07-15 17:03:10.116182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:03.657 [2024-07-15 17:03:10.116191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.657 [2024-07-15 17:03:10.116200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:03.657 [2024-07-15 17:03:10.116232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:03.657 [2024-07-15 17:03:10.116240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:03.657 [2024-07-15 17:03:10.116247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:03.657 [2024-07-15 17:03:10.116255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:03.657 [2024-07-15 17:03:10.116261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:03.657 [2024-07-15 17:03:10.116269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:21:03.657 [2024-07-15 17:03:10.116301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:03.657 [2024-07-15 17:03:10.116316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.657 [2024-07-15 17:03:10.116323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.657 [2024-07-15 17:03:10.116532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.657 [2024-07-15 17:03:10.116547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7050 with addr=10.0.0.2, port=4420 00:21:03.657 [2024-07-15 17:03:10.116556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7050 is same with the state(5) to be set 00:21:03.657 [2024-07-15 17:03:10.116727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.657 [2024-07-15 17:03:10.116746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e8d0 with addr=10.0.0.2, port=4420 00:21:03.657 [2024-07-15 17:03:10.116753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e8d0 is same with the state(5) to be set 00:21:03.657 [2024-07-15 17:03:10.116878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.657 [2024-07-15 17:03:10.116890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd2c70 with addr=10.0.0.2, port=4420 00:21:03.657 [2024-07-15 17:03:10.116898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd2c70 is same with the state(5) to be set 00:21:03.657 [2024-07-15 17:03:10.117106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.657 [2024-07-15 17:03:10.117118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe878b0 
with addr=10.0.0.2, port=4420 00:21:03.657 [2024-07-15 17:03:10.117125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe878b0 is same with the state(5) to be set 00:21:03.657 [2024-07-15 17:03:10.117273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.657 [2024-07-15 17:03:10.117287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0f1d0 with addr=10.0.0.2, port=4420 00:21:03.657 [2024-07-15 17:03:10.117298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f1d0 is same with the state(5) to be set 00:21:03.657 [2024-07-15 17:03:10.117308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7050 (9): Bad file descriptor 00:21:03.657 [2024-07-15 17:03:10.117317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e8d0 (9): Bad file descriptor 00:21:03.657 [2024-07-15 17:03:10.117325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd2c70 (9): Bad file descriptor 00:21:03.657 [2024-07-15 17:03:10.117334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe878b0 (9): Bad file descriptor 00:21:03.657 [2024-07-15 17:03:10.117358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0f1d0 (9): Bad file descriptor 00:21:03.657 [2024-07-15 17:03:10.117367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:03.657 [2024-07-15 17:03:10.117373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:03.657 [2024-07-15 17:03:10.117379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:21:03.657 [2024-07-15 17:03:10.117388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:03.657 [2024-07-15 17:03:10.117395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:03.657 [2024-07-15 17:03:10.117402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:03.657 [2024-07-15 17:03:10.117411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.657 [2024-07-15 17:03:10.117418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.657 [2024-07-15 17:03:10.117424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.657 [2024-07-15 17:03:10.117433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:03.657 [2024-07-15 17:03:10.117439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:03.657 [2024-07-15 17:03:10.117446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:03.657 [2024-07-15 17:03:10.117470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.657 [2024-07-15 17:03:10.117477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.657 [2024-07-15 17:03:10.117484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.657 [2024-07-15 17:03:10.117490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:03.657 [2024-07-15 17:03:10.117496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:03.657 [2024-07-15 17:03:10.117502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:03.657 [2024-07-15 17:03:10.117510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:03.657 [2024-07-15 17:03:10.117532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.917 17:03:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:03.917 17:03:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 144404 00:21:04.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (144404) - No such process 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@117 -- # sync 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:04.854 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:04.854 rmmod nvme_tcp 00:21:04.854 rmmod nvme_fabrics 00:21:04.854 rmmod nvme_keyring 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.114 17:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.016 17:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:21:07.016 00:21:07.016 real 0m7.705s 00:21:07.016 user 0m18.688s 00:21:07.016 sys 0m1.298s 00:21:07.016 17:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:07.016 17:03:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.016 ************************************ 00:21:07.016 END TEST nvmf_shutdown_tc3 00:21:07.016 ************************************ 00:21:07.016 17:03:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:07.016 17:03:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:07.016 00:21:07.016 real 0m31.000s 00:21:07.016 user 1m17.059s 00:21:07.016 sys 0m8.308s 00:21:07.016 17:03:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:07.016 17:03:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:07.016 ************************************ 00:21:07.016 END TEST nvmf_shutdown 00:21:07.016 ************************************ 00:21:07.275 17:03:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:07.275 17:03:13 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:07.275 17:03:13 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:07.275 17:03:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.275 17:03:13 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:07.275 17:03:13 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:07.275 17:03:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.275 17:03:13 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:07.275 17:03:13 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:07.275 17:03:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:07.275 
17:03:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:07.275 17:03:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.275 ************************************ 00:21:07.275 START TEST nvmf_multicontroller 00:21:07.275 ************************************ 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:07.275 * Looking for test storage... 00:21:07.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.275 17:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@297 -- # local -ga x722 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.624 17:03:19 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:12.624 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:12.624 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:12.624 Found net devices under 0000:86:00.0: cvl_0_0 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:12.624 
Found net devices under 0000:86:00.1: cvl_0_1 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.624 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:21:12.884 00:21:12.884 --- 10.0.0.2 ping statistics --- 00:21:12.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.884 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:21:12.884 00:21:12.884 --- 10.0.0.1 ping statistics --- 00:21:12.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.884 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=148841 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 
148841 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 148841 ']' 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.884 17:03:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.885 17:03:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.885 17:03:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.885 17:03:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.885 [2024-07-15 17:03:19.437311] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:21:12.885 [2024-07-15 17:03:19.437354] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.885 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.885 [2024-07-15 17:03:19.495282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:13.144 [2024-07-15 17:03:19.568820] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.144 [2024-07-15 17:03:19.568869] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.144 [2024-07-15 17:03:19.568879] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.144 [2024-07-15 17:03:19.568884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:13.144 [2024-07-15 17:03:19.568889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.144 [2024-07-15 17:03:19.568987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.144 [2024-07-15 17:03:19.569072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.144 [2024-07-15 17:03:19.569073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.710 [2024-07-15 17:03:20.289689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:21:13.710 Malloc0 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.710 [2024-07-15 17:03:20.356647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.710 17:03:20 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.710 [2024-07-15 17:03:20.364567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.710 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.968 Malloc1 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.968 
17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=149088 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 149088 /var/tmp/bdevperf.sock 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 149088 ']' 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.968 17:03:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.902 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.902 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:14.902 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:14.902 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.902 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.902 NVMe0n1 00:21:14.902 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.902 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:14.902 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:14.902 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.903 1 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.903 request: 00:21:14.903 { 00:21:14.903 "name": "NVMe0", 00:21:14.903 "trtype": "tcp", 00:21:14.903 "traddr": "10.0.0.2", 00:21:14.903 "adrfam": "ipv4", 00:21:14.903 "trsvcid": "4420", 00:21:14.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.903 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:14.903 "hostaddr": "10.0.0.2", 00:21:14.903 "hostsvcid": "60000", 00:21:14.903 "prchk_reftag": false, 00:21:14.903 "prchk_guard": false, 00:21:14.903 "hdgst": false, 00:21:14.903 "ddgst": false, 00:21:14.903 "method": "bdev_nvme_attach_controller", 00:21:14.903 "req_id": 1 00:21:14.903 } 00:21:14.903 Got JSON-RPC error response 00:21:14.903 response: 00:21:14.903 { 00:21:14.903 "code": -114, 00:21:14.903 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:14.903 } 00:21:14.903 
17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:21:14.903 request: 00:21:14.903 { 00:21:14.903 "name": "NVMe0", 00:21:14.903 "trtype": "tcp", 00:21:14.903 "traddr": "10.0.0.2", 00:21:14.903 "adrfam": "ipv4", 00:21:14.903 "trsvcid": "4420", 00:21:14.903 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:14.903 "hostaddr": "10.0.0.2", 00:21:14.903 "hostsvcid": "60000", 00:21:14.903 "prchk_reftag": false, 00:21:14.903 "prchk_guard": false, 00:21:14.903 "hdgst": false, 00:21:14.903 "ddgst": false, 00:21:14.903 "method": "bdev_nvme_attach_controller", 00:21:14.903 "req_id": 1 00:21:14.903 } 00:21:14.903 Got JSON-RPC error response 00:21:14.903 response: 00:21:14.903 { 00:21:14.903 "code": -114, 00:21:14.903 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:14.903 } 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.903 request: 00:21:14.903 { 00:21:14.903 "name": "NVMe0", 00:21:14.903 "trtype": "tcp", 00:21:14.903 "traddr": "10.0.0.2", 00:21:14.903 "adrfam": "ipv4", 00:21:14.903 "trsvcid": "4420", 00:21:14.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.903 "hostaddr": "10.0.0.2", 00:21:14.903 "hostsvcid": "60000", 00:21:14.903 "prchk_reftag": false, 00:21:14.903 "prchk_guard": false, 00:21:14.903 "hdgst": false, 00:21:14.903 "ddgst": false, 00:21:14.903 "multipath": "disable", 00:21:14.903 "method": "bdev_nvme_attach_controller", 00:21:14.903 "req_id": 1 00:21:14.903 } 00:21:14.903 Got JSON-RPC error response 00:21:14.903 response: 00:21:14.903 { 00:21:14.903 "code": -114, 00:21:14.903 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:14.903 } 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.903 request: 00:21:14.903 { 00:21:14.903 "name": "NVMe0", 00:21:14.903 "trtype": "tcp", 00:21:14.903 "traddr": "10.0.0.2", 00:21:14.903 "adrfam": "ipv4", 00:21:14.903 "trsvcid": "4420", 00:21:14.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.903 "hostaddr": "10.0.0.2", 00:21:14.903 
"hostsvcid": "60000", 00:21:14.903 "prchk_reftag": false, 00:21:14.903 "prchk_guard": false, 00:21:14.903 "hdgst": false, 00:21:14.903 "ddgst": false, 00:21:14.903 "multipath": "failover", 00:21:14.903 "method": "bdev_nvme_attach_controller", 00:21:14.903 "req_id": 1 00:21:14.903 } 00:21:14.903 Got JSON-RPC error response 00:21:14.903 response: 00:21:14.903 { 00:21:14.903 "code": -114, 00:21:14.903 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:14.903 } 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:14.903 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.904 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.904 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.904 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:14.904 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.904 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:15.162 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:15.162 17:03:21 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:15.162 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:15.162 17:03:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:16.538 0 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.538 
17:03:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 149088 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 149088 ']' 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 149088 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149088 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149088' 00:21:16.538 killing process with pid 149088 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 149088 00:21:16.538 17:03:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 149088 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:16.538 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:16.538 [2024-07-15 17:03:20.466604] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:21:16.538 [2024-07-15 17:03:20.466652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149088 ] 00:21:16.538 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.538 [2024-07-15 17:03:20.521349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.538 [2024-07-15 17:03:20.601976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.538 [2024-07-15 17:03:21.731916] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name cbc9330c-c686-488a-876e-bc6f1cbad8e0 already exists 00:21:16.538 [2024-07-15 17:03:21.731946] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:cbc9330c-c686-488a-876e-bc6f1cbad8e0 alias for bdev NVMe1n1 00:21:16.538 [2024-07-15 17:03:21.731954] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:16.538 Running I/O for 1 seconds... 
00:21:16.538 00:21:16.538 Latency(us) 00:21:16.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.538 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:16.538 NVMe0n1 : 1.00 24702.64 96.49 0.00 0.00 5174.90 3034.60 11625.52 00:21:16.538 =================================================================================================================== 00:21:16.538 Total : 24702.64 96.49 0.00 0.00 5174.90 3034.60 11625.52 00:21:16.538 Received shutdown signal, test time was about 1.000000 seconds 00:21:16.538 00:21:16.538 Latency(us) 00:21:16.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.538 =================================================================================================================== 00:21:16.538 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.538 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:16.538 rmmod nvme_tcp 00:21:16.538 rmmod nvme_fabrics 00:21:16.538 rmmod nvme_keyring 
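The bdevperf summary above reports NVMe0n1 at 24702.64 IOPS and 96.49 MiB/s for 128-deep 4096-byte writes; the MiB/s column is simply IOPS times the IO size divided by 2^20, which can be checked directly:

```python
# Reproduce the MiB/s column of the bdevperf table from its IOPS column.
# Figures (24702.64 IOPS, 4096-byte IO size) are taken from the log above.
def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    return iops * io_size_bytes / (1 << 20)

print(round(iops_to_mibps(24702.64, 4096), 2))  # 96.49
```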
00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 148841 ']' 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 148841 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 148841 ']' 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 148841 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.538 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148841 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148841' 00:21:16.797 killing process with pid 148841 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 148841 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 148841 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.797 17:03:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.333 17:03:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:19.333 00:21:19.333 real 0m11.757s 00:21:19.333 user 0m16.059s 00:21:19.333 sys 0m4.952s 00:21:19.333 17:03:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:19.333 17:03:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:19.333 ************************************ 00:21:19.333 END TEST nvmf_multicontroller 00:21:19.333 ************************************ 00:21:19.333 17:03:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:19.333 17:03:25 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:19.333 17:03:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:19.333 17:03:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:19.333 17:03:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:19.333 ************************************ 00:21:19.333 START TEST nvmf_aer 00:21:19.333 ************************************ 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:19.333 * Looking for test storage... 
00:21:19.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.333 17:03:25 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:19.334 17:03:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.608 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.608 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.608 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.608 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.608 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.608 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.608 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.608 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:24.608 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.608 17:03:30 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:24.608 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:24.608 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:24.609 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:24.609 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:24.609 Found net devices under 0000:86:00.0: cvl_0_0 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:24.609 Found net devices under 0000:86:00.1: cvl_0_1 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:24.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:21:24.609 00:21:24.609 --- 10.0.0.2 ping statistics --- 00:21:24.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.609 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:21:24.609 00:21:24.609 --- 10.0.0.1 ping statistics --- 00:21:24.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.609 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=152857 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 152857 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 152857 ']' 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.609 17:03:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:24.609 [2024-07-15 17:03:30.717001] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:21:24.609 [2024-07-15 17:03:30.717045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.609 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.609 [2024-07-15 17:03:30.772969] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.609 [2024-07-15 17:03:30.853571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:24.609 [2024-07-15 17:03:30.853605] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.609 [2024-07-15 17:03:30.853612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.609 [2024-07-15 17:03:30.853619] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.609 [2024-07-15 17:03:30.853624] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.609 [2024-07-15 17:03:30.853659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.609 [2024-07-15 17:03:30.853675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.609 [2024-07-15 17:03:30.853766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.609 [2024-07-15 17:03:30.853767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.869 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.869 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:24.869 17:03:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.869 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:24.869 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.128 [2024-07-15 17:03:31.575129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.128 17:03:31 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.128 Malloc0 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.128 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.129 [2024-07-15 17:03:31.626988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.129 [ 00:21:25.129 { 00:21:25.129 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:25.129 "subtype": "Discovery", 00:21:25.129 "listen_addresses": [], 00:21:25.129 "allow_any_host": true, 00:21:25.129 "hosts": [] 00:21:25.129 }, 00:21:25.129 { 00:21:25.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.129 "subtype": "NVMe", 00:21:25.129 "listen_addresses": [ 00:21:25.129 { 00:21:25.129 "trtype": "TCP", 00:21:25.129 "adrfam": "IPv4", 00:21:25.129 "traddr": "10.0.0.2", 00:21:25.129 "trsvcid": "4420" 00:21:25.129 } 00:21:25.129 ], 00:21:25.129 "allow_any_host": true, 00:21:25.129 "hosts": [], 00:21:25.129 "serial_number": "SPDK00000000000001", 00:21:25.129 "model_number": "SPDK bdev Controller", 00:21:25.129 "max_namespaces": 2, 00:21:25.129 "min_cntlid": 1, 00:21:25.129 "max_cntlid": 65519, 00:21:25.129 "namespaces": [ 00:21:25.129 { 00:21:25.129 "nsid": 1, 00:21:25.129 "bdev_name": "Malloc0", 00:21:25.129 "name": "Malloc0", 00:21:25.129 "nguid": "178ADB6B5CCE45C38E94CF049C954BE8", 00:21:25.129 "uuid": "178adb6b-5cce-45c3-8e94-cf049c954be8" 00:21:25.129 } 00:21:25.129 ] 00:21:25.129 } 00:21:25.129 ] 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=153104 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:25.129 17:03:31 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:25.129 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:25.129 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.389 Malloc1 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.389 Asynchronous Event Request test 00:21:25.389 Attaching to 10.0.0.2 00:21:25.389 Attached to 10.0.0.2 00:21:25.389 Registering asynchronous event callbacks... 00:21:25.389 Starting namespace attribute notice tests for all controllers... 00:21:25.389 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:25.389 aer_cb - Changed Namespace 00:21:25.389 Cleaning up... 
00:21:25.389 [ 00:21:25.389 { 00:21:25.389 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:25.389 "subtype": "Discovery", 00:21:25.389 "listen_addresses": [], 00:21:25.389 "allow_any_host": true, 00:21:25.389 "hosts": [] 00:21:25.389 }, 00:21:25.389 { 00:21:25.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.389 "subtype": "NVMe", 00:21:25.389 "listen_addresses": [ 00:21:25.389 { 00:21:25.389 "trtype": "TCP", 00:21:25.389 "adrfam": "IPv4", 00:21:25.389 "traddr": "10.0.0.2", 00:21:25.389 "trsvcid": "4420" 00:21:25.389 } 00:21:25.389 ], 00:21:25.389 "allow_any_host": true, 00:21:25.389 "hosts": [], 00:21:25.389 "serial_number": "SPDK00000000000001", 00:21:25.389 "model_number": "SPDK bdev Controller", 00:21:25.389 "max_namespaces": 2, 00:21:25.389 "min_cntlid": 1, 00:21:25.389 "max_cntlid": 65519, 00:21:25.389 "namespaces": [ 00:21:25.389 { 00:21:25.389 "nsid": 1, 00:21:25.389 "bdev_name": "Malloc0", 00:21:25.389 "name": "Malloc0", 00:21:25.389 "nguid": "178ADB6B5CCE45C38E94CF049C954BE8", 00:21:25.389 "uuid": "178adb6b-5cce-45c3-8e94-cf049c954be8" 00:21:25.389 }, 00:21:25.389 { 00:21:25.389 "nsid": 2, 00:21:25.389 "bdev_name": "Malloc1", 00:21:25.389 "name": "Malloc1", 00:21:25.389 "nguid": "05711BE8E81241EAA7AA4BF9D598E892", 00:21:25.389 "uuid": "05711be8-e812-41ea-a7aa-4bf9d598e892" 00:21:25.389 } 00:21:25.389 ] 00:21:25.389 } 00:21:25.389 ] 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 153104 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 
00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.389 17:03:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.389 rmmod nvme_tcp 00:21:25.389 rmmod nvme_fabrics 00:21:25.389 rmmod nvme_keyring 00:21:25.389 17:03:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.389 17:03:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:25.389 17:03:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:25.389 17:03:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 152857 ']' 00:21:25.389 17:03:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 152857 00:21:25.389 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 152857 ']' 00:21:25.389 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 
152857 00:21:25.389 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:25.649 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.649 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 152857 00:21:25.649 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.649 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.649 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 152857' 00:21:25.649 killing process with pid 152857 00:21:25.649 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 152857 00:21:25.649 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 152857 00:21:25.649 17:03:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.650 17:03:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.650 17:03:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.650 17:03:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.650 17:03:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.650 17:03:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.650 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.650 17:03:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.185 17:03:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:28.185 00:21:28.185 real 0m8.743s 00:21:28.185 user 0m7.016s 00:21:28.185 sys 0m4.241s 00:21:28.185 17:03:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.185 17:03:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.185 
************************************ 00:21:28.185 END TEST nvmf_aer 00:21:28.185 ************************************ 00:21:28.185 17:03:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:28.185 17:03:34 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:28.185 17:03:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:28.185 17:03:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.185 17:03:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:28.185 ************************************ 00:21:28.185 START TEST nvmf_async_init 00:21:28.185 ************************************ 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:28.185 * Looking for test storage... 00:21:28.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.185 
17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.185 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- 
# nguid=86e056a2ac244843ac7f415ff48c8878 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:28.186 17:03:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:33.463 
17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.463 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.464 17:03:39 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:33.464 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:33.464 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.464 17:03:39 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:33.464 Found net devices under 0000:86:00.0: cvl_0_0 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.464 17:03:39 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:33.464 Found net devices under 0000:86:00.1: cvl_0_1 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:33.464 
17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:33.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:21:33.464 00:21:33.464 --- 10.0.0.2 ping statistics --- 00:21:33.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.464 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:33.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:21:33.464 00:21:33.464 --- 10.0.0.1 ping statistics --- 00:21:33.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.464 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=156618 00:21:33.464 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 156618 00:21:33.465 17:03:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:33.465 17:03:39 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@829 -- # '[' -z 156618 ']' 00:21:33.465 17:03:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.465 17:03:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.465 17:03:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.465 17:03:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.465 17:03:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:33.465 [2024-07-15 17:03:40.036456] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:21:33.465 [2024-07-15 17:03:40.036506] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.465 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.465 [2024-07-15 17:03:40.096403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.724 [2024-07-15 17:03:40.176720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.724 [2024-07-15 17:03:40.176759] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.724 [2024-07-15 17:03:40.176767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.724 [2024-07-15 17:03:40.176774] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.724 [2024-07-15 17:03:40.176780] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:33.724 [2024-07-15 17:03:40.176802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.294 [2024-07-15 17:03:40.872750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.294 null0 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.294 
17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 86e056a2ac244843ac7f415ff48c8878 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.294 [2024-07-15 17:03:40.912966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.294 17:03:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.553 nvme0n1 00:21:34.553 17:03:41 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.553 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:34.553 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.553 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.553 [ 00:21:34.553 { 00:21:34.553 "name": "nvme0n1", 00:21:34.553 "aliases": [ 00:21:34.553 "86e056a2-ac24-4843-ac7f-415ff48c8878" 00:21:34.553 ], 00:21:34.553 "product_name": "NVMe disk", 00:21:34.553 "block_size": 512, 00:21:34.553 "num_blocks": 2097152, 00:21:34.553 "uuid": "86e056a2-ac24-4843-ac7f-415ff48c8878", 00:21:34.553 "assigned_rate_limits": { 00:21:34.553 "rw_ios_per_sec": 0, 00:21:34.553 "rw_mbytes_per_sec": 0, 00:21:34.553 "r_mbytes_per_sec": 0, 00:21:34.553 "w_mbytes_per_sec": 0 00:21:34.553 }, 00:21:34.553 "claimed": false, 00:21:34.553 "zoned": false, 00:21:34.553 "supported_io_types": { 00:21:34.553 "read": true, 00:21:34.553 "write": true, 00:21:34.553 "unmap": false, 00:21:34.553 "flush": true, 00:21:34.553 "reset": true, 00:21:34.553 "nvme_admin": true, 00:21:34.553 "nvme_io": true, 00:21:34.553 "nvme_io_md": false, 00:21:34.553 "write_zeroes": true, 00:21:34.553 "zcopy": false, 00:21:34.553 "get_zone_info": false, 00:21:34.553 "zone_management": false, 00:21:34.553 "zone_append": false, 00:21:34.553 "compare": true, 00:21:34.553 "compare_and_write": true, 00:21:34.553 "abort": true, 00:21:34.553 "seek_hole": false, 00:21:34.553 "seek_data": false, 00:21:34.553 "copy": true, 00:21:34.553 "nvme_iov_md": false 00:21:34.553 }, 00:21:34.553 "memory_domains": [ 00:21:34.553 { 00:21:34.553 "dma_device_id": "system", 00:21:34.553 "dma_device_type": 1 00:21:34.553 } 00:21:34.553 ], 00:21:34.553 "driver_specific": { 00:21:34.553 "nvme": [ 00:21:34.553 { 00:21:34.553 "trid": { 00:21:34.553 "trtype": "TCP", 00:21:34.553 "adrfam": "IPv4", 00:21:34.553 "traddr": "10.0.0.2", 
00:21:34.553 "trsvcid": "4420", 00:21:34.553 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:34.553 }, 00:21:34.553 "ctrlr_data": { 00:21:34.553 "cntlid": 1, 00:21:34.553 "vendor_id": "0x8086", 00:21:34.553 "model_number": "SPDK bdev Controller", 00:21:34.553 "serial_number": "00000000000000000000", 00:21:34.553 "firmware_revision": "24.09", 00:21:34.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:34.553 "oacs": { 00:21:34.553 "security": 0, 00:21:34.553 "format": 0, 00:21:34.553 "firmware": 0, 00:21:34.553 "ns_manage": 0 00:21:34.553 }, 00:21:34.553 "multi_ctrlr": true, 00:21:34.553 "ana_reporting": false 00:21:34.553 }, 00:21:34.553 "vs": { 00:21:34.553 "nvme_version": "1.3" 00:21:34.553 }, 00:21:34.553 "ns_data": { 00:21:34.553 "id": 1, 00:21:34.553 "can_share": true 00:21:34.553 } 00:21:34.553 } 00:21:34.553 ], 00:21:34.553 "mp_policy": "active_passive" 00:21:34.553 } 00:21:34.553 } 00:21:34.553 ] 00:21:34.553 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.553 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:34.553 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.553 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.553 [2024-07-15 17:03:41.161482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:34.553 [2024-07-15 17:03:41.161539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2525250 (9): Bad file descriptor 00:21:34.815 [2024-07-15 17:03:41.293307] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:34.815 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.815 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:34.815 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.815 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.815 [ 00:21:34.815 { 00:21:34.815 "name": "nvme0n1", 00:21:34.815 "aliases": [ 00:21:34.815 "86e056a2-ac24-4843-ac7f-415ff48c8878" 00:21:34.815 ], 00:21:34.815 "product_name": "NVMe disk", 00:21:34.815 "block_size": 512, 00:21:34.815 "num_blocks": 2097152, 00:21:34.815 "uuid": "86e056a2-ac24-4843-ac7f-415ff48c8878", 00:21:34.815 "assigned_rate_limits": { 00:21:34.815 "rw_ios_per_sec": 0, 00:21:34.815 "rw_mbytes_per_sec": 0, 00:21:34.815 "r_mbytes_per_sec": 0, 00:21:34.815 "w_mbytes_per_sec": 0 00:21:34.815 }, 00:21:34.815 "claimed": false, 00:21:34.815 "zoned": false, 00:21:34.815 "supported_io_types": { 00:21:34.815 "read": true, 00:21:34.815 "write": true, 00:21:34.815 "unmap": false, 00:21:34.815 "flush": true, 00:21:34.815 "reset": true, 00:21:34.815 "nvme_admin": true, 00:21:34.815 "nvme_io": true, 00:21:34.815 "nvme_io_md": false, 00:21:34.815 "write_zeroes": true, 00:21:34.815 "zcopy": false, 00:21:34.815 "get_zone_info": false, 00:21:34.815 "zone_management": false, 00:21:34.815 "zone_append": false, 00:21:34.815 "compare": true, 00:21:34.815 "compare_and_write": true, 00:21:34.815 "abort": true, 00:21:34.815 "seek_hole": false, 00:21:34.815 "seek_data": false, 00:21:34.815 "copy": true, 00:21:34.815 "nvme_iov_md": false 00:21:34.815 }, 00:21:34.815 "memory_domains": [ 00:21:34.815 { 00:21:34.815 "dma_device_id": "system", 00:21:34.815 "dma_device_type": 1 00:21:34.815 } 00:21:34.815 ], 00:21:34.815 "driver_specific": { 00:21:34.815 "nvme": [ 00:21:34.815 { 00:21:34.815 "trid": { 00:21:34.815 "trtype": "TCP", 00:21:34.815 "adrfam": "IPv4", 00:21:34.815 
"traddr": "10.0.0.2", 00:21:34.815 "trsvcid": "4420", 00:21:34.815 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:34.815 }, 00:21:34.815 "ctrlr_data": { 00:21:34.815 "cntlid": 2, 00:21:34.815 "vendor_id": "0x8086", 00:21:34.815 "model_number": "SPDK bdev Controller", 00:21:34.815 "serial_number": "00000000000000000000", 00:21:34.815 "firmware_revision": "24.09", 00:21:34.815 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:34.815 "oacs": { 00:21:34.815 "security": 0, 00:21:34.815 "format": 0, 00:21:34.815 "firmware": 0, 00:21:34.815 "ns_manage": 0 00:21:34.815 }, 00:21:34.815 "multi_ctrlr": true, 00:21:34.815 "ana_reporting": false 00:21:34.815 }, 00:21:34.815 "vs": { 00:21:34.815 "nvme_version": "1.3" 00:21:34.815 }, 00:21:34.815 "ns_data": { 00:21:34.815 "id": 1, 00:21:34.815 "can_share": true 00:21:34.815 } 00:21:34.815 } 00:21:34.815 ], 00:21:34.815 "mp_policy": "active_passive" 00:21:34.815 } 00:21:34.815 } 00:21:34.815 ] 00:21:34.815 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.815 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.815 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.N47tnYcI3t 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.N47tnYcI3t 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.816 [2024-07-15 17:03:41.338069] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:34.816 [2024-07-15 17:03:41.338183] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N47tnYcI3t 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.816 [2024-07-15 17:03:41.346088] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N47tnYcI3t 00:21:34.816 17:03:41 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.816 [2024-07-15 17:03:41.354122] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.816 [2024-07-15 17:03:41.354158] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:34.816 nvme0n1 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.816 [ 00:21:34.816 { 00:21:34.816 "name": "nvme0n1", 00:21:34.816 "aliases": [ 00:21:34.816 "86e056a2-ac24-4843-ac7f-415ff48c8878" 00:21:34.816 ], 00:21:34.816 "product_name": "NVMe disk", 00:21:34.816 "block_size": 512, 00:21:34.816 "num_blocks": 2097152, 00:21:34.816 "uuid": "86e056a2-ac24-4843-ac7f-415ff48c8878", 00:21:34.816 "assigned_rate_limits": { 00:21:34.816 "rw_ios_per_sec": 0, 00:21:34.816 "rw_mbytes_per_sec": 0, 00:21:34.816 "r_mbytes_per_sec": 0, 00:21:34.816 "w_mbytes_per_sec": 0 00:21:34.816 }, 00:21:34.816 "claimed": false, 00:21:34.816 "zoned": false, 00:21:34.816 "supported_io_types": { 00:21:34.816 "read": true, 00:21:34.816 "write": true, 00:21:34.816 "unmap": false, 00:21:34.816 "flush": true, 00:21:34.816 "reset": true, 00:21:34.816 "nvme_admin": true, 00:21:34.816 "nvme_io": true, 00:21:34.816 "nvme_io_md": false, 00:21:34.816 "write_zeroes": true, 00:21:34.816 "zcopy": false, 00:21:34.816 "get_zone_info": false, 00:21:34.816 "zone_management": false, 00:21:34.816 "zone_append": false, 00:21:34.816 "compare": true, 00:21:34.816 
"compare_and_write": true, 00:21:34.816 "abort": true, 00:21:34.816 "seek_hole": false, 00:21:34.816 "seek_data": false, 00:21:34.816 "copy": true, 00:21:34.816 "nvme_iov_md": false 00:21:34.816 }, 00:21:34.816 "memory_domains": [ 00:21:34.816 { 00:21:34.816 "dma_device_id": "system", 00:21:34.816 "dma_device_type": 1 00:21:34.816 } 00:21:34.816 ], 00:21:34.816 "driver_specific": { 00:21:34.816 "nvme": [ 00:21:34.816 { 00:21:34.816 "trid": { 00:21:34.816 "trtype": "TCP", 00:21:34.816 "adrfam": "IPv4", 00:21:34.816 "traddr": "10.0.0.2", 00:21:34.816 "trsvcid": "4421", 00:21:34.816 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:34.816 }, 00:21:34.816 "ctrlr_data": { 00:21:34.816 "cntlid": 3, 00:21:34.816 "vendor_id": "0x8086", 00:21:34.816 "model_number": "SPDK bdev Controller", 00:21:34.816 "serial_number": "00000000000000000000", 00:21:34.816 "firmware_revision": "24.09", 00:21:34.816 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:34.816 "oacs": { 00:21:34.816 "security": 0, 00:21:34.816 "format": 0, 00:21:34.816 "firmware": 0, 00:21:34.816 "ns_manage": 0 00:21:34.816 }, 00:21:34.816 "multi_ctrlr": true, 00:21:34.816 "ana_reporting": false 00:21:34.816 }, 00:21:34.816 "vs": { 00:21:34.816 "nvme_version": "1.3" 00:21:34.816 }, 00:21:34.816 "ns_data": { 00:21:34.816 "id": 1, 00:21:34.816 "can_share": true 00:21:34.816 } 00:21:34.816 } 00:21:34.816 ], 00:21:34.816 "mp_policy": "active_passive" 00:21:34.816 } 00:21:34.816 } 00:21:34.816 ] 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.816 17:03:41 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.N47tnYcI3t 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:34.816 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:34.817 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:34.817 rmmod nvme_tcp 00:21:34.817 rmmod nvme_fabrics 00:21:34.817 rmmod nvme_keyring 00:21:35.112 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:35.112 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:35.112 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:35.112 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 156618 ']' 00:21:35.112 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 156618 00:21:35.112 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 156618 ']' 00:21:35.112 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 156618 00:21:35.112 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:21:35.112 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.112 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 156618 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:35.113 17:03:41 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 156618' 00:21:35.113 killing process with pid 156618 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 156618 00:21:35.113 [2024-07-15 17:03:41.542620] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:35.113 [2024-07-15 17:03:41.542645] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 156618 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.113 17:03:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.643 17:03:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:37.643 00:21:37.643 real 0m9.374s 00:21:37.643 user 0m3.380s 00:21:37.643 sys 0m4.471s 00:21:37.643 17:03:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:37.643 17:03:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 
-- # set +x 00:21:37.643 ************************************ 00:21:37.643 END TEST nvmf_async_init 00:21:37.643 ************************************ 00:21:37.643 17:03:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:37.644 17:03:43 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:37.644 17:03:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:37.644 17:03:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:37.644 17:03:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:37.644 ************************************ 00:21:37.644 START TEST dma 00:21:37.644 ************************************ 00:21:37.644 17:03:43 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:37.644 * Looking for test storage... 00:21:37.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:37.644 17:03:43 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.644 17:03:43 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.644 17:03:43 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.644 17:03:43 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.644 17:03:43 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.644 17:03:43 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.644 17:03:43 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.644 17:03:43 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:37.644 17:03:43 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.644 17:03:43 nvmf_tcp.dma -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.644 17:03:43 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.644 17:03:43 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:37.644 17:03:43 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:37.644 00:21:37.644 real 0m0.087s 00:21:37.644 user 0m0.035s 00:21:37.644 sys 0m0.060s 00:21:37.644 17:03:43 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:37.644 17:03:43 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:37.644 ************************************ 00:21:37.644 END TEST dma 00:21:37.644 ************************************ 00:21:37.644 17:03:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:37.644 17:03:43 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:37.644 17:03:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:37.644 17:03:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:37.644 17:03:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:37.644 ************************************ 00:21:37.644 START TEST nvmf_identify 00:21:37.644 ************************************ 00:21:37.644 17:03:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:37.644 * Looking for test storage... 
00:21:37.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.644 17:03:44 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.644 17:03:44 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.644 17:03:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:42.931 17:03:48 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.931 
17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:42.931 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:42.931 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.931 17:03:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.931 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.931 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.931 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.931 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.931 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:42.932 Found net devices under 0000:86:00.0: cvl_0_0 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:42.932 Found net devices under 0000:86:00.1: cvl_0_1 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:42.932 17:03:49 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:42.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:21:42.932 00:21:42.932 --- 10.0.0.2 ping statistics --- 00:21:42.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.932 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:21:42.932 00:21:42.932 --- 10.0.0.1 ping statistics --- 00:21:42.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.932 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=160214 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 160214 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify 
-- common/autotest_common.sh@829 -- # '[' -z 160214 ']' 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.932 17:03:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:42.932 [2024-07-15 17:03:49.335096] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:21:42.932 [2024-07-15 17:03:49.335139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.932 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.932 [2024-07-15 17:03:49.391840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.932 [2024-07-15 17:03:49.470434] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.932 [2024-07-15 17:03:49.470467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.932 [2024-07-15 17:03:49.470474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.932 [2024-07-15 17:03:49.470481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.932 [2024-07-15 17:03:49.470486] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
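The netns bring-up traced above (nvmf_tcp_init in nvmf/common.sh) reduces to a short root-only procedure: create a namespace for the target side, move one port of the ice NIC pair into it, address both ends on 10.0.0.0/24, open TCP port 4420, and verify reachability in both directions with ping. A condensed sketch using the interface names from this run (cvl_0_0/cvl_0_1); it needs root and the physical hardware discovered earlier in the log, so treat it as a procedure outline rather than something to run as-is:

```shell
#!/usr/bin/env bash
# Sketch of the test-network setup performed by nvmf_tcp_init above.
# Requires root; cvl_0_0 / cvl_0_1 are the ice NIC ports found earlier in this log.
set -e

ip netns add cvl_0_0_ns_spdk                 # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it

ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic to the default port (4420) on the initiator side.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The nvmf_tgt process is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.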
00:21:42.932 [2024-07-15 17:03:49.470528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.932 [2024-07-15 17:03:49.470612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.932 [2024-07-15 17:03:49.470718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.932 [2024-07-15 17:03:49.470719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.518 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.518 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:21:43.518 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.518 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.518 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:43.518 [2024-07-15 17:03:50.148134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.518 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.518 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:43.518 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:43.518 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:43.777 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:43.777 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.777 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:43.777 Malloc0 00:21:43.777 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.777 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.777 
17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.777 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:43.777 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:43.778 [2024-07-15 17:03:50.231930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.778 17:03:50 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:43.778 [ 00:21:43.778 { 00:21:43.778 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:43.778 "subtype": "Discovery", 00:21:43.778 "listen_addresses": [ 00:21:43.778 { 00:21:43.778 "trtype": "TCP", 00:21:43.778 "adrfam": "IPv4", 00:21:43.778 "traddr": "10.0.0.2", 00:21:43.778 "trsvcid": "4420" 00:21:43.778 } 00:21:43.778 ], 00:21:43.778 "allow_any_host": true, 00:21:43.778 "hosts": [] 00:21:43.778 }, 00:21:43.778 { 00:21:43.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.778 "subtype": "NVMe", 00:21:43.778 "listen_addresses": [ 00:21:43.778 { 00:21:43.778 "trtype": "TCP", 00:21:43.778 "adrfam": "IPv4", 00:21:43.778 "traddr": "10.0.0.2", 00:21:43.778 "trsvcid": "4420" 00:21:43.778 } 00:21:43.778 ], 00:21:43.778 "allow_any_host": true, 00:21:43.778 "hosts": [], 00:21:43.778 "serial_number": "SPDK00000000000001", 00:21:43.778 "model_number": "SPDK bdev Controller", 00:21:43.778 "max_namespaces": 32, 00:21:43.778 "min_cntlid": 1, 00:21:43.778 "max_cntlid": 65519, 00:21:43.778 "namespaces": [ 00:21:43.778 { 00:21:43.778 "nsid": 1, 00:21:43.778 "bdev_name": "Malloc0", 00:21:43.778 "name": "Malloc0", 00:21:43.778 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:43.778 "eui64": "ABCDEF0123456789", 00:21:43.778 "uuid": "f51e1593-66de-463b-84d4-940549ce793c" 00:21:43.778 } 00:21:43.778 ] 00:21:43.778 } 00:21:43.778 ] 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.778 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:43.778 [2024-07-15 17:03:50.284701] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
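The rpc_cmd invocations above are the harness's wrapper around SPDK's scripts/rpc.py, talking to the nvmf_tgt started in the cvl_0_0_ns_spdk namespace over the default /var/tmp/spdk.sock socket. Pulled out of the trace, the subsystem configuration is five RPCs plus a final query (all arguments, including the --nguid/--eui64 values, are the ones shown in the log); this sketch needs a running target, so it is a transcript of the calls rather than a standalone script:

```shell
#!/usr/bin/env bash
# The RPC sequence issued by host/identify.sh, as seen in the trace above.
# rpc.py talks to the running nvmf_tgt over its default socket /var/tmp/spdk.sock.
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
     --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                            # prints the JSON shown in the log
```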
00:21:43.778 [2024-07-15 17:03:50.284748] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160462 ] 00:21:43.778 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.778 [2024-07-15 17:03:50.314810] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:43.778 [2024-07-15 17:03:50.314856] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:43.778 [2024-07-15 17:03:50.314861] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:43.778 [2024-07-15 17:03:50.314871] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:43.778 [2024-07-15 17:03:50.314877] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:43.778 [2024-07-15 17:03:50.315248] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:43.778 [2024-07-15 17:03:50.315277] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbcdec0 0 00:21:43.778 [2024-07-15 17:03:50.321237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:43.778 [2024-07-15 17:03:50.321247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:43.778 [2024-07-15 17:03:50.321252] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:43.778 [2024-07-15 17:03:50.321255] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:43.778 [2024-07-15 17:03:50.321291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.321296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.321300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbcdec0) 00:21:43.778 [2024-07-15 17:03:50.321313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:43.778 [2024-07-15 17:03:50.321328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc50e40, cid 0, qid 0 00:21:43.778 [2024-07-15 17:03:50.329235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.778 [2024-07-15 17:03:50.329243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.778 [2024-07-15 17:03:50.329247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc50e40) on tqpair=0xbcdec0 00:21:43.778 [2024-07-15 17:03:50.329263] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:43.778 [2024-07-15 17:03:50.329269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:43.778 [2024-07-15 17:03:50.329273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:43.778 [2024-07-15 17:03:50.329285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbcdec0) 00:21:43.778 [2024-07-15 17:03:50.329299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-15 17:03:50.329311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0xc50e40, cid 0, qid 0 00:21:43.778 [2024-07-15 17:03:50.329487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.778 [2024-07-15 17:03:50.329493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.778 [2024-07-15 17:03:50.329496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc50e40) on tqpair=0xbcdec0 00:21:43.778 [2024-07-15 17:03:50.329504] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:43.778 [2024-07-15 17:03:50.329513] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:43.778 [2024-07-15 17:03:50.329519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbcdec0) 00:21:43.778 [2024-07-15 17:03:50.329532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-15 17:03:50.329542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc50e40, cid 0, qid 0 00:21:43.778 [2024-07-15 17:03:50.329612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.778 [2024-07-15 17:03:50.329617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.778 [2024-07-15 17:03:50.329620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc50e40) on tqpair=0xbcdec0 00:21:43.778 [2024-07-15 17:03:50.329628] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:43.778 [2024-07-15 17:03:50.329634] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:43.778 [2024-07-15 17:03:50.329640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbcdec0) 00:21:43.778 [2024-07-15 17:03:50.329652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-15 17:03:50.329661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc50e40, cid 0, qid 0 00:21:43.778 [2024-07-15 17:03:50.329727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.778 [2024-07-15 17:03:50.329733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.778 [2024-07-15 17:03:50.329735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc50e40) on tqpair=0xbcdec0 00:21:43.778 [2024-07-15 17:03:50.329743] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:43.778 [2024-07-15 17:03:50.329751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbcdec0) 00:21:43.778 
[2024-07-15 17:03:50.329763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.778 [2024-07-15 17:03:50.329772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc50e40, cid 0, qid 0 00:21:43.778 [2024-07-15 17:03:50.329843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.778 [2024-07-15 17:03:50.329848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.778 [2024-07-15 17:03:50.329851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc50e40) on tqpair=0xbcdec0 00:21:43.778 [2024-07-15 17:03:50.329858] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:43.778 [2024-07-15 17:03:50.329862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:43.778 [2024-07-15 17:03:50.329870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:43.778 [2024-07-15 17:03:50.329975] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:43.778 [2024-07-15 17:03:50.329979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:43.778 [2024-07-15 17:03:50.329987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.778 [2024-07-15 17:03:50.329990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.329993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xbcdec0) 00:21:43.779 [2024-07-15 17:03:50.329998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-15 17:03:50.330008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc50e40, cid 0, qid 0 00:21:43.779 [2024-07-15 17:03:50.330080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.779 [2024-07-15 17:03:50.330085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.779 [2024-07-15 17:03:50.330088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc50e40) on tqpair=0xbcdec0 00:21:43.779 [2024-07-15 17:03:50.330096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:43.779 [2024-07-15 17:03:50.330103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbcdec0) 00:21:43.779 [2024-07-15 17:03:50.330115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-15 17:03:50.330124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc50e40, cid 0, qid 0 00:21:43.779 [2024-07-15 17:03:50.330193] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.779 [2024-07-15 17:03:50.330198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.779 [2024-07-15 17:03:50.330201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 
17:03:50.330204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc50e40) on tqpair=0xbcdec0 00:21:43.779 [2024-07-15 17:03:50.330208] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:43.779 [2024-07-15 17:03:50.330212] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:43.779 [2024-07-15 17:03:50.330218] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:43.779 [2024-07-15 17:03:50.330231] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:43.779 [2024-07-15 17:03:50.330240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbcdec0) 00:21:43.779 [2024-07-15 17:03:50.330249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-15 17:03:50.330258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc50e40, cid 0, qid 0 00:21:43.779 [2024-07-15 17:03:50.330363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:43.779 [2024-07-15 17:03:50.330369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:43.779 [2024-07-15 17:03:50.330374] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330377] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbcdec0): datao=0, datal=4096, cccid=0 00:21:43.779 [2024-07-15 17:03:50.330381] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc50e40) on tqpair(0xbcdec0): expected_datao=0, payload_size=4096 00:21:43.779 [2024-07-15 17:03:50.330385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330391] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330395] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.779 [2024-07-15 17:03:50.330448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.779 [2024-07-15 17:03:50.330451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc50e40) on tqpair=0xbcdec0 00:21:43.779 [2024-07-15 17:03:50.330460] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:43.779 [2024-07-15 17:03:50.330467] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:43.779 [2024-07-15 17:03:50.330471] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:43.779 [2024-07-15 17:03:50.330475] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:43.779 [2024-07-15 17:03:50.330479] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:43.779 [2024-07-15 17:03:50.330483] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:43.779 [2024-07-15 17:03:50.330490] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:43.779 [2024-07-15 17:03:50.330496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbcdec0) 00:21:43.779 [2024-07-15 17:03:50.330509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:43.779 [2024-07-15 17:03:50.330518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc50e40, cid 0, qid 0 00:21:43.779 [2024-07-15 17:03:50.330599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.779 [2024-07-15 17:03:50.330604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.779 [2024-07-15 17:03:50.330607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc50e40) on tqpair=0xbcdec0 00:21:43.779 [2024-07-15 17:03:50.330617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbcdec0) 00:21:43.779 [2024-07-15 17:03:50.330628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.779 [2024-07-15 17:03:50.330633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.779 [2024-07-15 
17:03:50.330639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbcdec0) 00:21:43.779 [2024-07-15 17:03:50.330644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.779 [2024-07-15 17:03:50.330651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbcdec0) 00:21:43.779 [2024-07-15 17:03:50.330662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.779 [2024-07-15 17:03:50.330667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:43.779 [2024-07-15 17:03:50.330678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.779 [2024-07-15 17:03:50.330682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:43.779 [2024-07-15 17:03:50.330692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:43.779 [2024-07-15 17:03:50.330698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbcdec0) 00:21:43.779 [2024-07-15 
17:03:50.330707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-15 17:03:50.330717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc50e40, cid 0, qid 0 00:21:43.779 [2024-07-15 17:03:50.330721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc50fc0, cid 1, qid 0 00:21:43.779 [2024-07-15 17:03:50.330726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51140, cid 2, qid 0 00:21:43.779 [2024-07-15 17:03:50.330729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:43.779 [2024-07-15 17:03:50.330733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51440, cid 4, qid 0 00:21:43.779 [2024-07-15 17:03:50.330840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.779 [2024-07-15 17:03:50.330846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.779 [2024-07-15 17:03:50.330849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51440) on tqpair=0xbcdec0 00:21:43.779 [2024-07-15 17:03:50.330856] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:43.779 [2024-07-15 17:03:50.330860] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:43.779 [2024-07-15 17:03:50.330869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbcdec0) 00:21:43.779 [2024-07-15 17:03:50.330878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) 
qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-15 17:03:50.330887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51440, cid 4, qid 0 00:21:43.779 [2024-07-15 17:03:50.330983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:43.779 [2024-07-15 17:03:50.330989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:43.779 [2024-07-15 17:03:50.330992] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.330995] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbcdec0): datao=0, datal=4096, cccid=4 00:21:43.779 [2024-07-15 17:03:50.330998] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc51440) on tqpair(0xbcdec0): expected_datao=0, payload_size=4096 00:21:43.779 [2024-07-15 17:03:50.331004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.331010] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.331013] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.331033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.779 [2024-07-15 17:03:50.331038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.779 [2024-07-15 17:03:50.331040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.331044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51440) on tqpair=0xbcdec0 00:21:43.779 [2024-07-15 17:03:50.331055] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:43.779 [2024-07-15 17:03:50.331075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.779 [2024-07-15 17:03:50.331079] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbcdec0) 00:21:43.779 [2024-07-15 17:03:50.331085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.779 [2024-07-15 17:03:50.331091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.331095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.331097] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbcdec0) 00:21:43.780 [2024-07-15 17:03:50.331102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.780 [2024-07-15 17:03:50.331116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51440, cid 4, qid 0 00:21:43.780 [2024-07-15 17:03:50.331120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc515c0, cid 5, qid 0 00:21:43.780 [2024-07-15 17:03:50.331221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:43.780 [2024-07-15 17:03:50.331233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:43.780 [2024-07-15 17:03:50.331236] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.331239] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbcdec0): datao=0, datal=1024, cccid=4 00:21:43.780 [2024-07-15 17:03:50.331243] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc51440) on tqpair(0xbcdec0): expected_datao=0, payload_size=1024 00:21:43.780 [2024-07-15 17:03:50.331247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.331252] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.331255] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.331260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.780 [2024-07-15 17:03:50.331265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.780 [2024-07-15 17:03:50.331268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.331271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc515c0) on tqpair=0xbcdec0 00:21:43.780 [2024-07-15 17:03:50.371377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.780 [2024-07-15 17:03:50.371389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.780 [2024-07-15 17:03:50.371393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.371396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51440) on tqpair=0xbcdec0 00:21:43.780 [2024-07-15 17:03:50.371412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.371416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbcdec0) 00:21:43.780 [2024-07-15 17:03:50.371423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-15 17:03:50.371445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51440, cid 4, qid 0 00:21:43.780 [2024-07-15 17:03:50.371525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:43.780 [2024-07-15 17:03:50.371531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:43.780 [2024-07-15 17:03:50.371534] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.371537] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xbcdec0): datao=0, datal=3072, cccid=4 00:21:43.780 [2024-07-15 17:03:50.371541] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc51440) on tqpair(0xbcdec0): expected_datao=0, payload_size=3072 00:21:43.780 [2024-07-15 17:03:50.371545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.371569] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.371573] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.416233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:43.780 [2024-07-15 17:03:50.416242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:43.780 [2024-07-15 17:03:50.416246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.416249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51440) on tqpair=0xbcdec0 00:21:43.780 [2024-07-15 17:03:50.416257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.416261] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbcdec0) 00:21:43.780 [2024-07-15 17:03:50.416268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.780 [2024-07-15 17:03:50.416282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc51440, cid 4, qid 0 00:21:43.780 [2024-07-15 17:03:50.416437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:43.780 [2024-07-15 17:03:50.416442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:43.780 [2024-07-15 17:03:50.416445] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.416448] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0xbcdec0): datao=0, datal=8, cccid=4 00:21:43.780 [2024-07-15 17:03:50.416452] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc51440) on tqpair(0xbcdec0): expected_datao=0, payload_size=8 00:21:43.780 [2024-07-15 17:03:50.416456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.416461] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:43.780 [2024-07-15 17:03:50.416465] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.042 [2024-07-15 17:03:50.457374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.042 [2024-07-15 17:03:50.457383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.042 [2024-07-15 17:03:50.457386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.042 [2024-07-15 17:03:50.457390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51440) on tqpair=0xbcdec0 00:21:44.042 ===================================================== 00:21:44.042 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:44.042 ===================================================== 00:21:44.042 Controller Capabilities/Features 00:21:44.042 ================================ 00:21:44.042 Vendor ID: 0000 00:21:44.042 Subsystem Vendor ID: 0000 00:21:44.042 Serial Number: .................... 00:21:44.042 Model Number: ........................................ 
00:21:44.042 Firmware Version: 24.09 00:21:44.042 Recommended Arb Burst: 0 00:21:44.042 IEEE OUI Identifier: 00 00 00 00:21:44.042 Multi-path I/O 00:21:44.042 May have multiple subsystem ports: No 00:21:44.042 May have multiple controllers: No 00:21:44.042 Associated with SR-IOV VF: No 00:21:44.042 Max Data Transfer Size: 131072 00:21:44.042 Max Number of Namespaces: 0 00:21:44.042 Max Number of I/O Queues: 1024 00:21:44.042 NVMe Specification Version (VS): 1.3 00:21:44.042 NVMe Specification Version (Identify): 1.3 00:21:44.042 Maximum Queue Entries: 128 00:21:44.042 Contiguous Queues Required: Yes 00:21:44.042 Arbitration Mechanisms Supported 00:21:44.042 Weighted Round Robin: Not Supported 00:21:44.042 Vendor Specific: Not Supported 00:21:44.042 Reset Timeout: 15000 ms 00:21:44.042 Doorbell Stride: 4 bytes 00:21:44.042 NVM Subsystem Reset: Not Supported 00:21:44.042 Command Sets Supported 00:21:44.042 NVM Command Set: Supported 00:21:44.042 Boot Partition: Not Supported 00:21:44.042 Memory Page Size Minimum: 4096 bytes 00:21:44.042 Memory Page Size Maximum: 4096 bytes 00:21:44.042 Persistent Memory Region: Not Supported 00:21:44.042 Optional Asynchronous Events Supported 00:21:44.042 Namespace Attribute Notices: Not Supported 00:21:44.042 Firmware Activation Notices: Not Supported 00:21:44.042 ANA Change Notices: Not Supported 00:21:44.042 PLE Aggregate Log Change Notices: Not Supported 00:21:44.042 LBA Status Info Alert Notices: Not Supported 00:21:44.042 EGE Aggregate Log Change Notices: Not Supported 00:21:44.042 Normal NVM Subsystem Shutdown event: Not Supported 00:21:44.042 Zone Descriptor Change Notices: Not Supported 00:21:44.042 Discovery Log Change Notices: Supported 00:21:44.042 Controller Attributes 00:21:44.042 128-bit Host Identifier: Not Supported 00:21:44.042 Non-Operational Permissive Mode: Not Supported 00:21:44.042 NVM Sets: Not Supported 00:21:44.042 Read Recovery Levels: Not Supported 00:21:44.042 Endurance Groups: Not Supported 00:21:44.042 
Predictable Latency Mode: Not Supported 00:21:44.042 Traffic Based Keep ALive: Not Supported 00:21:44.042 Namespace Granularity: Not Supported 00:21:44.042 SQ Associations: Not Supported 00:21:44.042 UUID List: Not Supported 00:21:44.042 Multi-Domain Subsystem: Not Supported 00:21:44.042 Fixed Capacity Management: Not Supported 00:21:44.042 Variable Capacity Management: Not Supported 00:21:44.042 Delete Endurance Group: Not Supported 00:21:44.042 Delete NVM Set: Not Supported 00:21:44.042 Extended LBA Formats Supported: Not Supported 00:21:44.042 Flexible Data Placement Supported: Not Supported 00:21:44.042 00:21:44.042 Controller Memory Buffer Support 00:21:44.042 ================================ 00:21:44.042 Supported: No 00:21:44.042 00:21:44.042 Persistent Memory Region Support 00:21:44.042 ================================ 00:21:44.042 Supported: No 00:21:44.042 00:21:44.042 Admin Command Set Attributes 00:21:44.042 ============================ 00:21:44.042 Security Send/Receive: Not Supported 00:21:44.042 Format NVM: Not Supported 00:21:44.042 Firmware Activate/Download: Not Supported 00:21:44.042 Namespace Management: Not Supported 00:21:44.042 Device Self-Test: Not Supported 00:21:44.042 Directives: Not Supported 00:21:44.042 NVMe-MI: Not Supported 00:21:44.042 Virtualization Management: Not Supported 00:21:44.043 Doorbell Buffer Config: Not Supported 00:21:44.043 Get LBA Status Capability: Not Supported 00:21:44.043 Command & Feature Lockdown Capability: Not Supported 00:21:44.043 Abort Command Limit: 1 00:21:44.043 Async Event Request Limit: 4 00:21:44.043 Number of Firmware Slots: N/A 00:21:44.043 Firmware Slot 1 Read-Only: N/A 00:21:44.043 Firmware Activation Without Reset: N/A 00:21:44.043 Multiple Update Detection Support: N/A 00:21:44.043 Firmware Update Granularity: No Information Provided 00:21:44.043 Per-Namespace SMART Log: No 00:21:44.043 Asymmetric Namespace Access Log Page: Not Supported 00:21:44.043 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:21:44.043 Command Effects Log Page: Not Supported 00:21:44.043 Get Log Page Extended Data: Supported 00:21:44.043 Telemetry Log Pages: Not Supported 00:21:44.043 Persistent Event Log Pages: Not Supported 00:21:44.043 Supported Log Pages Log Page: May Support 00:21:44.043 Commands Supported & Effects Log Page: Not Supported 00:21:44.043 Feature Identifiers & Effects Log Page:May Support 00:21:44.043 NVMe-MI Commands & Effects Log Page: May Support 00:21:44.043 Data Area 4 for Telemetry Log: Not Supported 00:21:44.043 Error Log Page Entries Supported: 128 00:21:44.043 Keep Alive: Not Supported 00:21:44.043 00:21:44.043 NVM Command Set Attributes 00:21:44.043 ========================== 00:21:44.043 Submission Queue Entry Size 00:21:44.043 Max: 1 00:21:44.043 Min: 1 00:21:44.043 Completion Queue Entry Size 00:21:44.043 Max: 1 00:21:44.043 Min: 1 00:21:44.043 Number of Namespaces: 0 00:21:44.043 Compare Command: Not Supported 00:21:44.043 Write Uncorrectable Command: Not Supported 00:21:44.043 Dataset Management Command: Not Supported 00:21:44.043 Write Zeroes Command: Not Supported 00:21:44.043 Set Features Save Field: Not Supported 00:21:44.043 Reservations: Not Supported 00:21:44.043 Timestamp: Not Supported 00:21:44.043 Copy: Not Supported 00:21:44.043 Volatile Write Cache: Not Present 00:21:44.043 Atomic Write Unit (Normal): 1 00:21:44.043 Atomic Write Unit (PFail): 1 00:21:44.043 Atomic Compare & Write Unit: 1 00:21:44.043 Fused Compare & Write: Supported 00:21:44.043 Scatter-Gather List 00:21:44.043 SGL Command Set: Supported 00:21:44.043 SGL Keyed: Supported 00:21:44.043 SGL Bit Bucket Descriptor: Not Supported 00:21:44.043 SGL Metadata Pointer: Not Supported 00:21:44.043 Oversized SGL: Not Supported 00:21:44.043 SGL Metadata Address: Not Supported 00:21:44.043 SGL Offset: Supported 00:21:44.043 Transport SGL Data Block: Not Supported 00:21:44.043 Replay Protected Memory Block: Not Supported 00:21:44.043 00:21:44.043 
Firmware Slot Information 00:21:44.043 ========================= 00:21:44.043 Active slot: 0 00:21:44.043 00:21:44.043 00:21:44.043 Error Log 00:21:44.043 ========= 00:21:44.043 00:21:44.043 Active Namespaces 00:21:44.043 ================= 00:21:44.043 Discovery Log Page 00:21:44.043 ================== 00:21:44.043 Generation Counter: 2 00:21:44.043 Number of Records: 2 00:21:44.043 Record Format: 0 00:21:44.043 00:21:44.043 Discovery Log Entry 0 00:21:44.043 ---------------------- 00:21:44.043 Transport Type: 3 (TCP) 00:21:44.043 Address Family: 1 (IPv4) 00:21:44.043 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:44.043 Entry Flags: 00:21:44.043 Duplicate Returned Information: 1 00:21:44.043 Explicit Persistent Connection Support for Discovery: 1 00:21:44.043 Transport Requirements: 00:21:44.043 Secure Channel: Not Required 00:21:44.043 Port ID: 0 (0x0000) 00:21:44.043 Controller ID: 65535 (0xffff) 00:21:44.043 Admin Max SQ Size: 128 00:21:44.043 Transport Service Identifier: 4420 00:21:44.043 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:44.043 Transport Address: 10.0.0.2 00:21:44.043 Discovery Log Entry 1 00:21:44.043 ---------------------- 00:21:44.043 Transport Type: 3 (TCP) 00:21:44.043 Address Family: 1 (IPv4) 00:21:44.043 Subsystem Type: 2 (NVM Subsystem) 00:21:44.043 Entry Flags: 00:21:44.043 Duplicate Returned Information: 0 00:21:44.043 Explicit Persistent Connection Support for Discovery: 0 00:21:44.043 Transport Requirements: 00:21:44.043 Secure Channel: Not Required 00:21:44.043 Port ID: 0 (0x0000) 00:21:44.043 Controller ID: 65535 (0xffff) 00:21:44.043 Admin Max SQ Size: 128 00:21:44.043 Transport Service Identifier: 4420 00:21:44.043 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:44.043 Transport Address: 10.0.0.2 [2024-07-15 17:03:50.457469] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:44.043 [2024-07-15 17:03:50.457479] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc50e40) on tqpair=0xbcdec0 00:21:44.043 [2024-07-15 17:03:50.457485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.043 [2024-07-15 17:03:50.457490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc50fc0) on tqpair=0xbcdec0 00:21:44.043 [2024-07-15 17:03:50.457494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.043 [2024-07-15 17:03:50.457498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc51140) on tqpair=0xbcdec0 00:21:44.043 [2024-07-15 17:03:50.457502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.043 [2024-07-15 17:03:50.457508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.043 [2024-07-15 17:03:50.457512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.043 [2024-07-15 17:03:50.457521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.043 [2024-07-15 17:03:50.457534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.043 [2024-07-15 17:03:50.457548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.043 [2024-07-15 17:03:50.457619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.043 [2024-07-15 17:03:50.457625] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.043 [2024-07-15 17:03:50.457628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.043 [2024-07-15 17:03:50.457637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.043 [2024-07-15 17:03:50.457650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.043 [2024-07-15 17:03:50.457661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.043 [2024-07-15 17:03:50.457746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.043 [2024-07-15 17:03:50.457752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.043 [2024-07-15 17:03:50.457755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457758] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.043 [2024-07-15 17:03:50.457762] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:44.043 [2024-07-15 17:03:50.457766] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:44.043 [2024-07-15 17:03:50.457774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457780] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.043 [2024-07-15 17:03:50.457786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.043 [2024-07-15 17:03:50.457795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.043 [2024-07-15 17:03:50.457865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.043 [2024-07-15 17:03:50.457870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.043 [2024-07-15 17:03:50.457873] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.043 [2024-07-15 17:03:50.457884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.043 [2024-07-15 17:03:50.457897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.043 [2024-07-15 17:03:50.457907] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.043 [2024-07-15 17:03:50.457977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.043 [2024-07-15 17:03:50.457983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.043 [2024-07-15 17:03:50.457986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.457989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.043 [2024-07-15 
17:03:50.457996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.458000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.043 [2024-07-15 17:03:50.458003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.043 [2024-07-15 17:03:50.458009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.043 [2024-07-15 17:03:50.458018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.043 [2024-07-15 17:03:50.458088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.043 [2024-07-15 17:03:50.458094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.458097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.458108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.458120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.458129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.044 [2024-07-15 17:03:50.458201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.458206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.458209] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.458221] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458231] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458234] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.458240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.458249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.044 [2024-07-15 17:03:50.458320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.458325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.458328] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.458340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.458352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.458361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.044 [2024-07-15 
17:03:50.458432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.458438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.458441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458445] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.458452] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458459] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.458465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.458474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.044 [2024-07-15 17:03:50.458540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.458546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.458549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.458560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.458572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.458581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.044 [2024-07-15 17:03:50.458651] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.458656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.458659] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.458670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.458683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.458692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.044 [2024-07-15 17:03:50.458761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.458767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.458770] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.458781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 
[2024-07-15 17:03:50.458787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.458793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.458802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.044 [2024-07-15 17:03:50.458873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.458880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.458883] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.458894] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.458906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.458915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.044 [2024-07-15 17:03:50.458988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.458993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.458996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.458999] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 
00:21:44.044 [2024-07-15 17:03:50.459007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.459019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.459028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.044 [2024-07-15 17:03:50.459145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.459150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.459153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.459165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.459177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.459187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.044 [2024-07-15 17:03:50.459263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.459269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 
[2024-07-15 17:03:50.459272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.459283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.459295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.459304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.044 [2024-07-15 17:03:50.459374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.459380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.459383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.459398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.044 [2024-07-15 17:03:50.459411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.044 [2024-07-15 17:03:50.459419] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 
00:21:44.044 [2024-07-15 17:03:50.459489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.044 [2024-07-15 17:03:50.459495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.044 [2024-07-15 17:03:50.459498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.044 [2024-07-15 17:03:50.459509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459512] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.044 [2024-07-15 17:03:50.459515] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.045 [2024-07-15 17:03:50.459521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.045 [2024-07-15 17:03:50.459530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.045 [2024-07-15 17:03:50.459602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.045 [2024-07-15 17:03:50.459608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.045 [2024-07-15 17:03:50.459611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.045 [2024-07-15 17:03:50.459622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.045 [2024-07-15 17:03:50.459634] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.045 [2024-07-15 17:03:50.459643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.045 [2024-07-15 17:03:50.459713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.045 [2024-07-15 17:03:50.459718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.045 [2024-07-15 17:03:50.459721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.045 [2024-07-15 17:03:50.459732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.045 [2024-07-15 17:03:50.459745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.045 [2024-07-15 17:03:50.459753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.045 [2024-07-15 17:03:50.459825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.045 [2024-07-15 17:03:50.459831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.045 [2024-07-15 17:03:50.459834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459837] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.045 [2024-07-15 17:03:50.459847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459850] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.045 [2024-07-15 17:03:50.459859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.045 [2024-07-15 17:03:50.459868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.045 [2024-07-15 17:03:50.459937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.045 [2024-07-15 17:03:50.459943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.045 [2024-07-15 17:03:50.459946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.045 [2024-07-15 17:03:50.459956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.459963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.045 [2024-07-15 17:03:50.459969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.045 [2024-07-15 17:03:50.459977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.045 [2024-07-15 17:03:50.463233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.045 [2024-07-15 17:03:50.463242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.045 [2024-07-15 17:03:50.463245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.463248] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.045 [2024-07-15 17:03:50.463258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.463261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.463265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbcdec0) 00:21:44.045 [2024-07-15 17:03:50.463271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.045 [2024-07-15 17:03:50.463282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc512c0, cid 3, qid 0 00:21:44.045 [2024-07-15 17:03:50.463437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.045 [2024-07-15 17:03:50.463443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.045 [2024-07-15 17:03:50.463446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.463449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc512c0) on tqpair=0xbcdec0 00:21:44.045 [2024-07-15 17:03:50.463455] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:21:44.045 00:21:44.045 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:44.045 [2024-07-15 17:03:50.499152] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:21:44.045 [2024-07-15 17:03:50.499186] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160464 ] 00:21:44.045 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.045 [2024-07-15 17:03:50.528459] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:44.045 [2024-07-15 17:03:50.528501] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:44.045 [2024-07-15 17:03:50.528506] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:44.045 [2024-07-15 17:03:50.528516] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:44.045 [2024-07-15 17:03:50.528521] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:44.045 [2024-07-15 17:03:50.528823] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:44.045 [2024-07-15 17:03:50.528845] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc03ec0 0 00:21:44.045 [2024-07-15 17:03:50.542233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:44.045 [2024-07-15 17:03:50.542245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:44.045 [2024-07-15 17:03:50.542249] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:44.045 [2024-07-15 17:03:50.542252] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:44.045 [2024-07-15 17:03:50.542279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.542285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:21:44.045 [2024-07-15 17:03:50.542288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc03ec0) 00:21:44.045 [2024-07-15 17:03:50.542299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:44.045 [2024-07-15 17:03:50.542314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86e40, cid 0, qid 0 00:21:44.045 [2024-07-15 17:03:50.550236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.045 [2024-07-15 17:03:50.550244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.045 [2024-07-15 17:03:50.550247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.550251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc86e40) on tqpair=0xc03ec0 00:21:44.045 [2024-07-15 17:03:50.550261] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:44.045 [2024-07-15 17:03:50.550266] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:44.045 [2024-07-15 17:03:50.550271] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:44.045 [2024-07-15 17:03:50.550281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.550285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.550288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc03ec0) 00:21:44.045 [2024-07-15 17:03:50.550295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.045 [2024-07-15 17:03:50.550307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86e40, cid 0, qid 0 
00:21:44.045 [2024-07-15 17:03:50.550481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.045 [2024-07-15 17:03:50.550487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.045 [2024-07-15 17:03:50.550490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.550493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc86e40) on tqpair=0xc03ec0 00:21:44.045 [2024-07-15 17:03:50.550498] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:44.045 [2024-07-15 17:03:50.550505] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:44.045 [2024-07-15 17:03:50.550511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.550516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.550519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc03ec0) 00:21:44.045 [2024-07-15 17:03:50.550525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.045 [2024-07-15 17:03:50.550535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86e40, cid 0, qid 0 00:21:44.045 [2024-07-15 17:03:50.550630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.045 [2024-07-15 17:03:50.550636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.045 [2024-07-15 17:03:50.550639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.550642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc86e40) on tqpair=0xc03ec0 00:21:44.045 [2024-07-15 17:03:50.550646] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:44.045 [2024-07-15 17:03:50.550653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:44.045 [2024-07-15 17:03:50.550658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.550662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.045 [2024-07-15 17:03:50.550665] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc03ec0) 00:21:44.045 [2024-07-15 17:03:50.550670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.045 [2024-07-15 17:03:50.550680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86e40, cid 0, qid 0 00:21:44.045 [2024-07-15 17:03:50.550781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.045 [2024-07-15 17:03:50.550786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.046 [2024-07-15 17:03:50.550789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.550792] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc86e40) on tqpair=0xc03ec0 00:21:44.046 [2024-07-15 17:03:50.550796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:44.046 [2024-07-15 17:03:50.550804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.550807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.550810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc03ec0) 00:21:44.046 [2024-07-15 17:03:50.550816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.046 [2024-07-15 17:03:50.550825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86e40, cid 0, qid 0 00:21:44.046 [2024-07-15 17:03:50.550894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.046 [2024-07-15 17:03:50.550900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.046 [2024-07-15 17:03:50.550903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.550906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc86e40) on tqpair=0xc03ec0 00:21:44.046 [2024-07-15 17:03:50.550909] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:44.046 [2024-07-15 17:03:50.550913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:44.046 [2024-07-15 17:03:50.550920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:44.046 [2024-07-15 17:03:50.551025] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:44.046 [2024-07-15 17:03:50.551028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:44.046 [2024-07-15 17:03:50.551037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.551040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.551043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc03ec0) 00:21:44.046 [2024-07-15 17:03:50.551049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.046 [2024-07-15 17:03:50.551058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86e40, cid 0, qid 0 00:21:44.046 [2024-07-15 17:03:50.551133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.046 [2024-07-15 17:03:50.551138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.046 [2024-07-15 17:03:50.551141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.551144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc86e40) on tqpair=0xc03ec0 00:21:44.046 [2024-07-15 17:03:50.551148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:44.046 [2024-07-15 17:03:50.551156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.551159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.551162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc03ec0) 00:21:44.046 [2024-07-15 17:03:50.551168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.046 [2024-07-15 17:03:50.551177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86e40, cid 0, qid 0 00:21:44.046 [2024-07-15 17:03:50.551284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.046 [2024-07-15 17:03:50.551290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.046 [2024-07-15 17:03:50.551293] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.551296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc86e40) on tqpair=0xc03ec0 00:21:44.046 [2024-07-15 17:03:50.551299] 
nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:44.046 [2024-07-15 17:03:50.551303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:44.046 [2024-07-15 17:03:50.551310] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:44.046 [2024-07-15 17:03:50.551321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:44.046 [2024-07-15 17:03:50.551329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.551332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc03ec0) 00:21:44.046 [2024-07-15 17:03:50.551338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.046 [2024-07-15 17:03:50.551348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86e40, cid 0, qid 0 00:21:44.046 [2024-07-15 17:03:50.551464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.046 [2024-07-15 17:03:50.551470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.046 [2024-07-15 17:03:50.551473] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.551476] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc03ec0): datao=0, datal=4096, cccid=0 00:21:44.046 [2024-07-15 17:03:50.551480] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc86e40) on tqpair(0xc03ec0): expected_datao=0, payload_size=4096 00:21:44.046 [2024-07-15 17:03:50.551483] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.551524] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.551528] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.046 [2024-07-15 17:03:50.596244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.046 [2024-07-15 17:03:50.596247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc86e40) on tqpair=0xc03ec0 00:21:44.046 [2024-07-15 17:03:50.596258] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:44.046 [2024-07-15 17:03:50.596265] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:44.046 [2024-07-15 17:03:50.596269] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:44.046 [2024-07-15 17:03:50.596273] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:44.046 [2024-07-15 17:03:50.596276] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:44.046 [2024-07-15 17:03:50.596280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:44.046 [2024-07-15 17:03:50.596288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:44.046 [2024-07-15 17:03:50.596295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596299] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc03ec0) 00:21:44.046 [2024-07-15 17:03:50.596309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:44.046 [2024-07-15 17:03:50.596320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86e40, cid 0, qid 0 00:21:44.046 [2024-07-15 17:03:50.596510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.046 [2024-07-15 17:03:50.596515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.046 [2024-07-15 17:03:50.596518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc86e40) on tqpair=0xc03ec0 00:21:44.046 [2024-07-15 17:03:50.596527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596530] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc03ec0) 00:21:44.046 [2024-07-15 17:03:50.596538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.046 [2024-07-15 17:03:50.596544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc03ec0) 00:21:44.046 [2024-07-15 17:03:50.596554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:44.046 [2024-07-15 17:03:50.596559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596562] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc03ec0) 00:21:44.046 [2024-07-15 17:03:50.596570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.046 [2024-07-15 17:03:50.596578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.046 [2024-07-15 17:03:50.596584] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc03ec0) 00:21:44.046 [2024-07-15 17:03:50.596589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.047 [2024-07-15 17:03:50.596593] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.596602] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.596608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.596612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc03ec0) 00:21:44.047 [2024-07-15 17:03:50.596617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.047 [2024-07-15 17:03:50.596629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xc86e40, cid 0, qid 0 00:21:44.047 [2024-07-15 17:03:50.596634] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc86fc0, cid 1, qid 0 00:21:44.047 [2024-07-15 17:03:50.596637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc87140, cid 2, qid 0 00:21:44.047 [2024-07-15 17:03:50.596642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc872c0, cid 3, qid 0 00:21:44.047 [2024-07-15 17:03:50.596646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc87440, cid 4, qid 0 00:21:44.047 [2024-07-15 17:03:50.596759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.047 [2024-07-15 17:03:50.596765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.047 [2024-07-15 17:03:50.596767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.596770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc87440) on tqpair=0xc03ec0 00:21:44.047 [2024-07-15 17:03:50.596775] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:44.047 [2024-07-15 17:03:50.596779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.596785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.596791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.596796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.596800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.596803] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc03ec0) 00:21:44.047 [2024-07-15 17:03:50.596808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:44.047 [2024-07-15 17:03:50.596817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc87440, cid 4, qid 0 00:21:44.047 [2024-07-15 17:03:50.596921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.047 [2024-07-15 17:03:50.596927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.047 [2024-07-15 17:03:50.596930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.596933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc87440) on tqpair=0xc03ec0 00:21:44.047 [2024-07-15 17:03:50.596984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.596993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.597001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc03ec0) 00:21:44.047 [2024-07-15 17:03:50.597010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.047 [2024-07-15 17:03:50.597020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc87440, cid 4, qid 0 00:21:44.047 [2024-07-15 17:03:50.597107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.047 [2024-07-15 17:03:50.597113] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.047 [2024-07-15 17:03:50.597116] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597119] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc03ec0): datao=0, datal=4096, cccid=4 00:21:44.047 [2024-07-15 17:03:50.597123] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc87440) on tqpair(0xc03ec0): expected_datao=0, payload_size=4096 00:21:44.047 [2024-07-15 17:03:50.597126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597132] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597136] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.047 [2024-07-15 17:03:50.597179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.047 [2024-07-15 17:03:50.597182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc87440) on tqpair=0xc03ec0 00:21:44.047 [2024-07-15 17:03:50.597193] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:44.047 [2024-07-15 17:03:50.597200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.597208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.597214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xc03ec0) 00:21:44.047 [2024-07-15 17:03:50.597223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.047 [2024-07-15 17:03:50.597241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc87440, cid 4, qid 0 00:21:44.047 [2024-07-15 17:03:50.597335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.047 [2024-07-15 17:03:50.597341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.047 [2024-07-15 17:03:50.597344] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597347] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc03ec0): datao=0, datal=4096, cccid=4 00:21:44.047 [2024-07-15 17:03:50.597351] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc87440) on tqpair(0xc03ec0): expected_datao=0, payload_size=4096 00:21:44.047 [2024-07-15 17:03:50.597354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597360] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597363] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.047 [2024-07-15 17:03:50.597430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.047 [2024-07-15 17:03:50.597433] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc87440) on tqpair=0xc03ec0 00:21:44.047 [2024-07-15 17:03:50.597449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:44.047 [2024-07-15 
17:03:50.597457] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.597463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc03ec0) 00:21:44.047 [2024-07-15 17:03:50.597472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.047 [2024-07-15 17:03:50.597482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc87440, cid 4, qid 0 00:21:44.047 [2024-07-15 17:03:50.597566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.047 [2024-07-15 17:03:50.597572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.047 [2024-07-15 17:03:50.597575] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597578] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc03ec0): datao=0, datal=4096, cccid=4 00:21:44.047 [2024-07-15 17:03:50.597581] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc87440) on tqpair(0xc03ec0): expected_datao=0, payload_size=4096 00:21:44.047 [2024-07-15 17:03:50.597585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597590] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597593] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.047 [2024-07-15 17:03:50.597639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.047 [2024-07-15 17:03:50.597642] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc87440) on tqpair=0xc03ec0 00:21:44.047 [2024-07-15 17:03:50.597651] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.597658] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.597667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.597673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.597677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.597681] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.597686] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:44.047 [2024-07-15 17:03:50.597690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:44.047 [2024-07-15 17:03:50.597694] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:44.047 [2024-07-15 17:03:50.597706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0xc03ec0) 00:21:44.047 [2024-07-15 17:03:50.597716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.047 [2024-07-15 17:03:50.597723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc03ec0) 00:21:44.047 [2024-07-15 17:03:50.597735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.047 [2024-07-15 17:03:50.597747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc87440, cid 4, qid 0 00:21:44.047 [2024-07-15 17:03:50.597751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc875c0, cid 5, qid 0 00:21:44.047 [2024-07-15 17:03:50.597878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.047 [2024-07-15 17:03:50.597883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.047 [2024-07-15 17:03:50.597886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.047 [2024-07-15 17:03:50.597889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc87440) on tqpair=0xc03ec0 00:21:44.047 [2024-07-15 17:03:50.597895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.048 [2024-07-15 17:03:50.597900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.048 [2024-07-15 17:03:50.597903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.597906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc875c0) on tqpair=0xc03ec0 00:21:44.048 [2024-07-15 17:03:50.597914] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.597918] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc03ec0) 00:21:44.048 [2024-07-15 17:03:50.597923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.048 [2024-07-15 17:03:50.597932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc875c0, cid 5, qid 0 00:21:44.048 [2024-07-15 17:03:50.598026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.048 [2024-07-15 17:03:50.598032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.048 [2024-07-15 17:03:50.598035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc875c0) on tqpair=0xc03ec0 00:21:44.048 [2024-07-15 17:03:50.598045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc03ec0) 00:21:44.048 [2024-07-15 17:03:50.598054] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.048 [2024-07-15 17:03:50.598063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc875c0, cid 5, qid 0 00:21:44.048 [2024-07-15 17:03:50.598135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.048 [2024-07-15 17:03:50.598141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.048 [2024-07-15 17:03:50.598144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc875c0) on 
tqpair=0xc03ec0 00:21:44.048 [2024-07-15 17:03:50.598154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc03ec0) 00:21:44.048 [2024-07-15 17:03:50.598163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.048 [2024-07-15 17:03:50.598172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc875c0, cid 5, qid 0 00:21:44.048 [2024-07-15 17:03:50.598280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.048 [2024-07-15 17:03:50.598286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.048 [2024-07-15 17:03:50.598289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc875c0) on tqpair=0xc03ec0 00:21:44.048 [2024-07-15 17:03:50.598305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc03ec0) 00:21:44.048 [2024-07-15 17:03:50.598315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.048 [2024-07-15 17:03:50.598321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc03ec0) 00:21:44.048 [2024-07-15 17:03:50.598329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.048 [2024-07-15 
17:03:50.598335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc03ec0) 00:21:44.048 [2024-07-15 17:03:50.598343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.048 [2024-07-15 17:03:50.598349] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc03ec0) 00:21:44.048 [2024-07-15 17:03:50.598357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.048 [2024-07-15 17:03:50.598368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc875c0, cid 5, qid 0 00:21:44.048 [2024-07-15 17:03:50.598372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc87440, cid 4, qid 0 00:21:44.048 [2024-07-15 17:03:50.598376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc87740, cid 6, qid 0 00:21:44.048 [2024-07-15 17:03:50.598380] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc878c0, cid 7, qid 0 00:21:44.048 [2024-07-15 17:03:50.598527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.048 [2024-07-15 17:03:50.598533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.048 [2024-07-15 17:03:50.598536] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598539] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc03ec0): datao=0, datal=8192, cccid=5 00:21:44.048 [2024-07-15 17:03:50.598543] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xc875c0) on tqpair(0xc03ec0): expected_datao=0, payload_size=8192 00:21:44.048 [2024-07-15 17:03:50.598547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598653] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598656] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.048 [2024-07-15 17:03:50.598666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.048 [2024-07-15 17:03:50.598669] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598672] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc03ec0): datao=0, datal=512, cccid=4 00:21:44.048 [2024-07-15 17:03:50.598676] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc87440) on tqpair(0xc03ec0): expected_datao=0, payload_size=512 00:21:44.048 [2024-07-15 17:03:50.598679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598685] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598688] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.048 [2024-07-15 17:03:50.598699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.048 [2024-07-15 17:03:50.598701] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598704] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc03ec0): datao=0, datal=512, cccid=6 00:21:44.048 [2024-07-15 17:03:50.598708] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc87740) on tqpair(0xc03ec0): expected_datao=0, payload_size=512 
00:21:44.048 [2024-07-15 17:03:50.598712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598717] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598720] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.048 [2024-07-15 17:03:50.598729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.048 [2024-07-15 17:03:50.598732] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598735] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc03ec0): datao=0, datal=4096, cccid=7 00:21:44.048 [2024-07-15 17:03:50.598739] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc878c0) on tqpair(0xc03ec0): expected_datao=0, payload_size=4096 00:21:44.048 [2024-07-15 17:03:50.598742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598748] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598751] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.048 [2024-07-15 17:03:50.598763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.048 [2024-07-15 17:03:50.598766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc875c0) on tqpair=0xc03ec0 00:21:44.048 [2024-07-15 17:03:50.598778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.048 [2024-07-15 17:03:50.598783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.048 [2024-07-15 17:03:50.598786] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc87440) on tqpair=0xc03ec0 00:21:44.048 [2024-07-15 17:03:50.598797] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.048 [2024-07-15 17:03:50.598802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.048 [2024-07-15 17:03:50.598805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc87740) on tqpair=0xc03ec0 00:21:44.048 [2024-07-15 17:03:50.598814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.048 [2024-07-15 17:03:50.598819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.048 [2024-07-15 17:03:50.598822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.048 [2024-07-15 17:03:50.598825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc878c0) on tqpair=0xc03ec0 00:21:44.048 ===================================================== 00:21:44.048 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:44.048 ===================================================== 00:21:44.048 Controller Capabilities/Features 00:21:44.048 ================================ 00:21:44.048 Vendor ID: 8086 00:21:44.048 Subsystem Vendor ID: 8086 00:21:44.048 Serial Number: SPDK00000000000001 00:21:44.048 Model Number: SPDK bdev Controller 00:21:44.048 Firmware Version: 24.09 00:21:44.048 Recommended Arb Burst: 6 00:21:44.048 IEEE OUI Identifier: e4 d2 5c 00:21:44.048 Multi-path I/O 00:21:44.048 May have multiple subsystem ports: Yes 00:21:44.048 May have multiple controllers: Yes 00:21:44.048 Associated with SR-IOV VF: No 00:21:44.048 Max Data Transfer Size: 131072 00:21:44.048 Max Number of Namespaces: 32 00:21:44.048 Max Number of I/O 
Queues: 127 00:21:44.048 NVMe Specification Version (VS): 1.3 00:21:44.048 NVMe Specification Version (Identify): 1.3 00:21:44.048 Maximum Queue Entries: 128 00:21:44.048 Contiguous Queues Required: Yes 00:21:44.048 Arbitration Mechanisms Supported 00:21:44.048 Weighted Round Robin: Not Supported 00:21:44.048 Vendor Specific: Not Supported 00:21:44.048 Reset Timeout: 15000 ms 00:21:44.048 Doorbell Stride: 4 bytes 00:21:44.048 NVM Subsystem Reset: Not Supported 00:21:44.048 Command Sets Supported 00:21:44.048 NVM Command Set: Supported 00:21:44.048 Boot Partition: Not Supported 00:21:44.048 Memory Page Size Minimum: 4096 bytes 00:21:44.048 Memory Page Size Maximum: 4096 bytes 00:21:44.048 Persistent Memory Region: Not Supported 00:21:44.048 Optional Asynchronous Events Supported 00:21:44.048 Namespace Attribute Notices: Supported 00:21:44.048 Firmware Activation Notices: Not Supported 00:21:44.049 ANA Change Notices: Not Supported 00:21:44.049 PLE Aggregate Log Change Notices: Not Supported 00:21:44.049 LBA Status Info Alert Notices: Not Supported 00:21:44.049 EGE Aggregate Log Change Notices: Not Supported 00:21:44.049 Normal NVM Subsystem Shutdown event: Not Supported 00:21:44.049 Zone Descriptor Change Notices: Not Supported 00:21:44.049 Discovery Log Change Notices: Not Supported 00:21:44.049 Controller Attributes 00:21:44.049 128-bit Host Identifier: Supported 00:21:44.049 Non-Operational Permissive Mode: Not Supported 00:21:44.049 NVM Sets: Not Supported 00:21:44.049 Read Recovery Levels: Not Supported 00:21:44.049 Endurance Groups: Not Supported 00:21:44.049 Predictable Latency Mode: Not Supported 00:21:44.049 Traffic Based Keep ALive: Not Supported 00:21:44.049 Namespace Granularity: Not Supported 00:21:44.049 SQ Associations: Not Supported 00:21:44.049 UUID List: Not Supported 00:21:44.049 Multi-Domain Subsystem: Not Supported 00:21:44.049 Fixed Capacity Management: Not Supported 00:21:44.049 Variable Capacity Management: Not Supported 00:21:44.049 Delete 
Endurance Group: Not Supported 00:21:44.049 Delete NVM Set: Not Supported 00:21:44.049 Extended LBA Formats Supported: Not Supported 00:21:44.049 Flexible Data Placement Supported: Not Supported 00:21:44.049 00:21:44.049 Controller Memory Buffer Support 00:21:44.049 ================================ 00:21:44.049 Supported: No 00:21:44.049 00:21:44.049 Persistent Memory Region Support 00:21:44.049 ================================ 00:21:44.049 Supported: No 00:21:44.049 00:21:44.049 Admin Command Set Attributes 00:21:44.049 ============================ 00:21:44.049 Security Send/Receive: Not Supported 00:21:44.049 Format NVM: Not Supported 00:21:44.049 Firmware Activate/Download: Not Supported 00:21:44.049 Namespace Management: Not Supported 00:21:44.049 Device Self-Test: Not Supported 00:21:44.049 Directives: Not Supported 00:21:44.049 NVMe-MI: Not Supported 00:21:44.049 Virtualization Management: Not Supported 00:21:44.049 Doorbell Buffer Config: Not Supported 00:21:44.049 Get LBA Status Capability: Not Supported 00:21:44.049 Command & Feature Lockdown Capability: Not Supported 00:21:44.049 Abort Command Limit: 4 00:21:44.049 Async Event Request Limit: 4 00:21:44.049 Number of Firmware Slots: N/A 00:21:44.049 Firmware Slot 1 Read-Only: N/A 00:21:44.049 Firmware Activation Without Reset: N/A 00:21:44.049 Multiple Update Detection Support: N/A 00:21:44.049 Firmware Update Granularity: No Information Provided 00:21:44.049 Per-Namespace SMART Log: No 00:21:44.049 Asymmetric Namespace Access Log Page: Not Supported 00:21:44.049 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:44.049 Command Effects Log Page: Supported 00:21:44.049 Get Log Page Extended Data: Supported 00:21:44.049 Telemetry Log Pages: Not Supported 00:21:44.049 Persistent Event Log Pages: Not Supported 00:21:44.049 Supported Log Pages Log Page: May Support 00:21:44.049 Commands Supported & Effects Log Page: Not Supported 00:21:44.049 Feature Identifiers & Effects Log Page:May Support 00:21:44.049 NVMe-MI 
Commands & Effects Log Page: May Support 00:21:44.049 Data Area 4 for Telemetry Log: Not Supported 00:21:44.049 Error Log Page Entries Supported: 128 00:21:44.049 Keep Alive: Supported 00:21:44.049 Keep Alive Granularity: 10000 ms 00:21:44.049 00:21:44.049 NVM Command Set Attributes 00:21:44.049 ========================== 00:21:44.049 Submission Queue Entry Size 00:21:44.049 Max: 64 00:21:44.049 Min: 64 00:21:44.049 Completion Queue Entry Size 00:21:44.049 Max: 16 00:21:44.049 Min: 16 00:21:44.049 Number of Namespaces: 32 00:21:44.049 Compare Command: Supported 00:21:44.049 Write Uncorrectable Command: Not Supported 00:21:44.049 Dataset Management Command: Supported 00:21:44.049 Write Zeroes Command: Supported 00:21:44.049 Set Features Save Field: Not Supported 00:21:44.049 Reservations: Supported 00:21:44.049 Timestamp: Not Supported 00:21:44.049 Copy: Supported 00:21:44.049 Volatile Write Cache: Present 00:21:44.049 Atomic Write Unit (Normal): 1 00:21:44.049 Atomic Write Unit (PFail): 1 00:21:44.049 Atomic Compare & Write Unit: 1 00:21:44.049 Fused Compare & Write: Supported 00:21:44.049 Scatter-Gather List 00:21:44.049 SGL Command Set: Supported 00:21:44.049 SGL Keyed: Supported 00:21:44.049 SGL Bit Bucket Descriptor: Not Supported 00:21:44.049 SGL Metadata Pointer: Not Supported 00:21:44.049 Oversized SGL: Not Supported 00:21:44.049 SGL Metadata Address: Not Supported 00:21:44.049 SGL Offset: Supported 00:21:44.049 Transport SGL Data Block: Not Supported 00:21:44.049 Replay Protected Memory Block: Not Supported 00:21:44.049 00:21:44.049 Firmware Slot Information 00:21:44.049 ========================= 00:21:44.049 Active slot: 1 00:21:44.049 Slot 1 Firmware Revision: 24.09 00:21:44.049 00:21:44.049 00:21:44.049 Commands Supported and Effects 00:21:44.049 ============================== 00:21:44.049 Admin Commands 00:21:44.049 -------------- 00:21:44.049 Get Log Page (02h): Supported 00:21:44.049 Identify (06h): Supported 00:21:44.049 Abort (08h): Supported 
00:21:44.049 Set Features (09h): Supported 00:21:44.049 Get Features (0Ah): Supported 00:21:44.049 Asynchronous Event Request (0Ch): Supported 00:21:44.049 Keep Alive (18h): Supported 00:21:44.049 I/O Commands 00:21:44.049 ------------ 00:21:44.049 Flush (00h): Supported LBA-Change 00:21:44.049 Write (01h): Supported LBA-Change 00:21:44.049 Read (02h): Supported 00:21:44.049 Compare (05h): Supported 00:21:44.049 Write Zeroes (08h): Supported LBA-Change 00:21:44.049 Dataset Management (09h): Supported LBA-Change 00:21:44.049 Copy (19h): Supported LBA-Change 00:21:44.049 00:21:44.049 Error Log 00:21:44.049 ========= 00:21:44.049 00:21:44.049 Arbitration 00:21:44.049 =========== 00:21:44.049 Arbitration Burst: 1 00:21:44.049 00:21:44.049 Power Management 00:21:44.049 ================ 00:21:44.049 Number of Power States: 1 00:21:44.049 Current Power State: Power State #0 00:21:44.049 Power State #0: 00:21:44.049 Max Power: 0.00 W 00:21:44.049 Non-Operational State: Operational 00:21:44.049 Entry Latency: Not Reported 00:21:44.049 Exit Latency: Not Reported 00:21:44.049 Relative Read Throughput: 0 00:21:44.049 Relative Read Latency: 0 00:21:44.049 Relative Write Throughput: 0 00:21:44.049 Relative Write Latency: 0 00:21:44.049 Idle Power: Not Reported 00:21:44.049 Active Power: Not Reported 00:21:44.049 Non-Operational Permissive Mode: Not Supported 00:21:44.049 00:21:44.049 Health Information 00:21:44.049 ================== 00:21:44.049 Critical Warnings: 00:21:44.049 Available Spare Space: OK 00:21:44.049 Temperature: OK 00:21:44.049 Device Reliability: OK 00:21:44.049 Read Only: No 00:21:44.049 Volatile Memory Backup: OK 00:21:44.049 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:44.049 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:44.049 Available Spare: 0% 00:21:44.049 Available Spare Threshold: 0% 00:21:44.049 Life Percentage Used:[2024-07-15 17:03:50.598907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.049 [2024-07-15 
17:03:50.598912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc03ec0) 00:21:44.049 [2024-07-15 17:03:50.598918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.049 [2024-07-15 17:03:50.598929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc878c0, cid 7, qid 0 00:21:44.049 [2024-07-15 17:03:50.599055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.049 [2024-07-15 17:03:50.599061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.049 [2024-07-15 17:03:50.599064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.049 [2024-07-15 17:03:50.599068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc878c0) on tqpair=0xc03ec0 00:21:44.049 [2024-07-15 17:03:50.599097] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:44.049 [2024-07-15 17:03:50.599106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc86e40) on tqpair=0xc03ec0 00:21:44.049 [2024-07-15 17:03:50.599111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.049 [2024-07-15 17:03:50.599116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc86fc0) on tqpair=0xc03ec0 00:21:44.049 [2024-07-15 17:03:50.599120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.049 [2024-07-15 17:03:50.599124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc87140) on tqpair=0xc03ec0 00:21:44.049 [2024-07-15 17:03:50.599128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.049 [2024-07-15 
17:03:50.599132] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc872c0) on tqpair=0xc03ec0 00:21:44.049 [2024-07-15 17:03:50.599136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.049 [2024-07-15 17:03:50.599142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.049 [2024-07-15 17:03:50.599146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.049 [2024-07-15 17:03:50.599149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc03ec0) 00:21:44.049 [2024-07-15 17:03:50.599154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.049 [2024-07-15 17:03:50.599165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc872c0, cid 3, qid 0 00:21:44.049 [2024-07-15 17:03:50.599255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.049 [2024-07-15 17:03:50.599261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.049 [2024-07-15 17:03:50.599264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.049 [2024-07-15 17:03:50.599267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc872c0) on tqpair=0xc03ec0 00:21:44.050 [2024-07-15 17:03:50.599273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.599276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.599279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc03ec0) 00:21:44.050 [2024-07-15 17:03:50.599285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.050 [2024-07-15 17:03:50.599298] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc872c0, cid 3, qid 0 00:21:44.050 [2024-07-15 17:03:50.599406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.050 [2024-07-15 17:03:50.599412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.050 [2024-07-15 17:03:50.599415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.599418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc872c0) on tqpair=0xc03ec0 00:21:44.050 [2024-07-15 17:03:50.599422] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:44.050 [2024-07-15 17:03:50.599425] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:44.050 [2024-07-15 17:03:50.599433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.599437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.599440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc03ec0) 00:21:44.050 [2024-07-15 17:03:50.599445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.050 [2024-07-15 17:03:50.599456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc872c0, cid 3, qid 0 00:21:44.050 [2024-07-15 17:03:50.599529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.050 [2024-07-15 17:03:50.599535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.050 [2024-07-15 17:03:50.599538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.599541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc872c0) on tqpair=0xc03ec0 00:21:44.050 [2024-07-15 17:03:50.599550] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.599553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.599556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc03ec0) 00:21:44.050 [2024-07-15 17:03:50.599562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.050 [2024-07-15 17:03:50.599571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc872c0, cid 3, qid 0 00:21:44.050 [2024-07-15 17:03:50.603232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.050 [2024-07-15 17:03:50.603240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.050 [2024-07-15 17:03:50.603243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.603246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc872c0) on tqpair=0xc03ec0 00:21:44.050 [2024-07-15 17:03:50.603257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.603260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.603263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc03ec0) 00:21:44.050 [2024-07-15 17:03:50.603269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.050 [2024-07-15 17:03:50.603280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc872c0, cid 3, qid 0 00:21:44.050 [2024-07-15 17:03:50.603474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.050 [2024-07-15 17:03:50.603479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.050 [2024-07-15 17:03:50.603482] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.050 [2024-07-15 17:03:50.603486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc872c0) on tqpair=0xc03ec0 00:21:44.050 [2024-07-15 17:03:50.603491] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:21:44.050 0% 00:21:44.050 Data Units Read: 0 00:21:44.050 Data Units Written: 0 00:21:44.050 Host Read Commands: 0 00:21:44.050 Host Write Commands: 0 00:21:44.050 Controller Busy Time: 0 minutes 00:21:44.050 Power Cycles: 0 00:21:44.050 Power On Hours: 0 hours 00:21:44.050 Unsafe Shutdowns: 0 00:21:44.050 Unrecoverable Media Errors: 0 00:21:44.050 Lifetime Error Log Entries: 0 00:21:44.050 Warning Temperature Time: 0 minutes 00:21:44.050 Critical Temperature Time: 0 minutes 00:21:44.050 00:21:44.050 Number of Queues 00:21:44.050 ================ 00:21:44.050 Number of I/O Submission Queues: 127 00:21:44.050 Number of I/O Completion Queues: 127 00:21:44.050 00:21:44.050 Active Namespaces 00:21:44.050 ================= 00:21:44.050 Namespace ID:1 00:21:44.050 Error Recovery Timeout: Unlimited 00:21:44.050 Command Set Identifier: NVM (00h) 00:21:44.050 Deallocate: Supported 00:21:44.050 Deallocated/Unwritten Error: Not Supported 00:21:44.050 Deallocated Read Value: Unknown 00:21:44.050 Deallocate in Write Zeroes: Not Supported 00:21:44.050 Deallocated Guard Field: 0xFFFF 00:21:44.050 Flush: Supported 00:21:44.050 Reservation: Supported 00:21:44.050 Namespace Sharing Capabilities: Multiple Controllers 00:21:44.050 Size (in LBAs): 131072 (0GiB) 00:21:44.050 Capacity (in LBAs): 131072 (0GiB) 00:21:44.050 Utilization (in LBAs): 131072 (0GiB) 00:21:44.050 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:44.050 EUI64: ABCDEF0123456789 00:21:44.050 UUID: f51e1593-66de-463b-84d4-940549ce793c 00:21:44.050 Thin Provisioning: Not Supported 00:21:44.050 Per-NS Atomic Units: Yes 00:21:44.050 Atomic Boundary Size 
(Normal): 0 00:21:44.050 Atomic Boundary Size (PFail): 0 00:21:44.050 Atomic Boundary Offset: 0 00:21:44.050 Maximum Single Source Range Length: 65535 00:21:44.050 Maximum Copy Length: 65535 00:21:44.050 Maximum Source Range Count: 1 00:21:44.050 NGUID/EUI64 Never Reused: No 00:21:44.050 Namespace Write Protected: No 00:21:44.050 Number of LBA Formats: 1 00:21:44.050 Current LBA Format: LBA Format #00 00:21:44.050 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:44.050 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.050 rmmod nvme_tcp 00:21:44.050 rmmod nvme_fabrics 00:21:44.050 rmmod nvme_keyring 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@125 -- # return 0 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 160214 ']' 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 160214 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 160214 ']' 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 160214 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.050 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160214 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160214' 00:21:44.310 killing process with pid 160214 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 160214 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 160214 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:21:44.310 17:03:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.843 17:03:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.843 00:21:46.843 real 0m9.000s 00:21:46.843 user 0m7.204s 00:21:46.843 sys 0m4.301s 00:21:46.843 17:03:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.843 17:03:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.843 ************************************ 00:21:46.843 END TEST nvmf_identify 00:21:46.843 ************************************ 00:21:46.843 17:03:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:46.843 17:03:53 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:46.843 17:03:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:46.843 17:03:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.843 17:03:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.843 ************************************ 00:21:46.843 START TEST nvmf_perf 00:21:46.843 ************************************ 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:46.843 * Looking for test storage... 
00:21:46.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.843 17:03:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.118 17:03:58 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:52.118 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:52.118 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.118 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:52.119 Found net devices under 0000:86:00.0: cvl_0_0 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:21:52.119 Found net devices under 0000:86:00.1: cvl_0_1 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.119 
17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:52.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:21:52.119 00:21:52.119 --- 10.0.0.2 ping statistics --- 00:21:52.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.119 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:52.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:21:52.119 00:21:52.119 --- 10.0.0.1 ping statistics --- 00:21:52.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.119 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=163971 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 163971 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 163971 ']' 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.119 17:03:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:52.119 [2024-07-15 17:03:58.721756] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:21:52.119 [2024-07-15 17:03:58.721799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.119 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.119 [2024-07-15 17:03:58.778601] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.377 [2024-07-15 17:03:58.858419] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.377 [2024-07-15 17:03:58.858458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.377 [2024-07-15 17:03:58.858465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.377 [2024-07-15 17:03:58.858471] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.377 [2024-07-15 17:03:58.858476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:52.377 [2024-07-15 17:03:58.858520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.377 [2024-07-15 17:03:58.858614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.377 [2024-07-15 17:03:58.858704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.377 [2024-07-15 17:03:58.858705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.942 17:03:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.942 17:03:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:21:52.942 17:03:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.942 17:03:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.942 17:03:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:52.942 17:03:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.942 17:03:59 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:52.942 17:03:59 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:56.226 17:04:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:56.226 17:04:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:56.226 17:04:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:21:56.226 17:04:02 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:56.484 17:04:02 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:56.484 17:04:02 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:5e:00.0 ']' 00:21:56.484 17:04:02 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:56.484 17:04:02 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:56.484 17:04:02 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:56.484 [2024-07-15 17:04:03.115732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.484 17:04:03 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:56.743 17:04:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:56.743 17:04:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:57.001 17:04:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:57.001 17:04:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:57.259 17:04:03 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.259 [2024-07-15 17:04:03.862555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.259 17:04:03 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:57.520 17:04:04 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:21:57.520 17:04:04 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 
00:21:57.520 17:04:04 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:57.520 17:04:04 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:58.932 Initializing NVMe Controllers 00:21:58.932 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:21:58.932 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:21:58.932 Initialization complete. Launching workers. 00:21:58.932 ======================================================== 00:21:58.932 Latency(us) 00:21:58.932 Device Information : IOPS MiB/s Average min max 00:21:58.932 PCIE (0000:5e:00.0) NSID 1 from core 0: 97655.20 381.47 327.33 34.27 5206.83 00:21:58.932 ======================================================== 00:21:58.932 Total : 97655.20 381.47 327.33 34.27 5206.83 00:21:58.932 00:21:58.932 17:04:05 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:58.932 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.309 Initializing NVMe Controllers 00:22:00.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:00.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:00.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:00.309 Initialization complete. Launching workers. 
00:22:00.309 ======================================================== 00:22:00.309 Latency(us) 00:22:00.309 Device Information : IOPS MiB/s Average min max 00:22:00.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 87.00 0.34 11503.35 143.99 45650.50 00:22:00.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19754.83 5159.36 50874.00 00:22:00.309 ======================================================== 00:22:00.309 Total : 138.00 0.54 14552.81 143.99 50874.00 00:22:00.309 00:22:00.309 17:04:06 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:00.309 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.246 Initializing NVMe Controllers 00:22:01.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:01.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:01.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:01.246 Initialization complete. Launching workers. 
00:22:01.246 ======================================================== 00:22:01.246 Latency(us) 00:22:01.246 Device Information : IOPS MiB/s Average min max 00:22:01.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10759.99 42.03 2987.77 384.31 6646.74 00:22:01.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3878.00 15.15 8290.53 5444.96 17839.86 00:22:01.246 ======================================================== 00:22:01.246 Total : 14637.99 57.18 4392.61 384.31 17839.86 00:22:01.246 00:22:01.246 17:04:07 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:01.246 17:04:07 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:01.246 17:04:07 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:01.505 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.040 Initializing NVMe Controllers 00:22:04.040 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.040 Controller IO queue size 128, less than required. 00:22:04.040 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.040 Controller IO queue size 128, less than required. 00:22:04.040 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:04.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:04.040 Initialization complete. Launching workers. 
00:22:04.040 ======================================================== 00:22:04.040 Latency(us) 00:22:04.040 Device Information : IOPS MiB/s Average min max 00:22:04.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1317.67 329.42 98721.74 48739.28 144710.89 00:22:04.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 622.34 155.59 217935.44 69820.11 360427.44 00:22:04.040 ======================================================== 00:22:04.040 Total : 1940.01 485.00 136964.73 48739.28 360427.44 00:22:04.040 00:22:04.040 17:04:10 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:04.040 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.040 No valid NVMe controllers or AIO or URING devices found 00:22:04.040 Initializing NVMe Controllers 00:22:04.040 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.040 Controller IO queue size 128, less than required. 00:22:04.040 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.040 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:04.040 Controller IO queue size 128, less than required. 00:22:04.040 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.040 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:04.040 WARNING: Some requested NVMe devices were skipped 00:22:04.040 17:04:10 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:04.040 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.570 Initializing NVMe Controllers 00:22:06.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:06.570 Controller IO queue size 128, less than required. 00:22:06.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.570 Controller IO queue size 128, less than required. 00:22:06.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:06.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:06.570 Initialization complete. Launching workers. 
00:22:06.570 00:22:06.570 ==================== 00:22:06.570 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:06.570 TCP transport: 00:22:06.570 polls: 29272 00:22:06.570 idle_polls: 13357 00:22:06.570 sock_completions: 15915 00:22:06.570 nvme_completions: 5309 00:22:06.570 submitted_requests: 7948 00:22:06.570 queued_requests: 1 00:22:06.570 00:22:06.570 ==================== 00:22:06.570 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:06.570 TCP transport: 00:22:06.570 polls: 26025 00:22:06.570 idle_polls: 9974 00:22:06.570 sock_completions: 16051 00:22:06.570 nvme_completions: 5479 00:22:06.570 submitted_requests: 8272 00:22:06.570 queued_requests: 1 00:22:06.570 ======================================================== 00:22:06.570 Latency(us) 00:22:06.570 Device Information : IOPS MiB/s Average min max 00:22:06.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1326.25 331.56 98820.17 56609.68 173679.73 00:22:06.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1368.72 342.18 95387.44 41093.95 145558.35 00:22:06.570 ======================================================== 00:22:06.570 Total : 2694.97 673.74 97076.76 41093.95 173679.73 00:22:06.570 00:22:06.570 17:04:12 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:06.570 17:04:12 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.570 17:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:06.570 17:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:06.570 17:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:06.570 17:04:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.570 17:04:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:06.570 17:04:13 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.570 17:04:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:06.570 17:04:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.570 17:04:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.570 rmmod nvme_tcp 00:22:06.570 rmmod nvme_fabrics 00:22:06.570 rmmod nvme_keyring 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 163971 ']' 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 163971 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 163971 ']' 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 163971 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 163971 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 163971' 00:22:06.829 killing process with pid 163971 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 163971 00:22:06.829 17:04:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 163971 00:22:08.206 17:04:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:08.206 17:04:14 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:08.206 17:04:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:08.206 17:04:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.206 17:04:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:08.206 17:04:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.206 17:04:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.206 17:04:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.741 17:04:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:10.741 00:22:10.741 real 0m23.779s 00:22:10.741 user 1m3.640s 00:22:10.741 sys 0m7.283s 00:22:10.741 17:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:10.741 17:04:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:10.741 ************************************ 00:22:10.741 END TEST nvmf_perf 00:22:10.741 ************************************ 00:22:10.741 17:04:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:10.741 17:04:16 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:10.741 17:04:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:10.741 17:04:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:10.741 17:04:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:10.741 ************************************ 00:22:10.741 START TEST nvmf_fio_host 00:22:10.741 ************************************ 00:22:10.741 17:04:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:10.741 * Looking for test storage... 
00:22:10.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:10.742 17:04:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.742 17:04:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.742 17:04:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.742 17:04:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.742 17:04:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.742 17:04:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.742 17:04:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.742 17:04:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:10.742 17:04:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.742 17:04:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.742 17:04:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:10.742 
17:04:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:10.742 17:04:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.011 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.011 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:16.011 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.011 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.011 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.011 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.011 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.011 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.011 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.011 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:16.012 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:16.012 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:16.012 Found net devices under 0000:86:00.0: cvl_0_0 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:16.012 Found net devices under 0000:86:00.1: cvl_0_1 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:16.012 
17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:16.012 17:04:21 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:16.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:22:16.012 00:22:16.012 --- 10.0.0.2 ping statistics --- 00:22:16.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.012 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:22:16.012 00:22:16.012 --- 10.0.0.1 ping statistics --- 00:22:16.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.012 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.012 17:04:22 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=169998 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 169998 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 169998 ']' 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.012 17:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.012 [2024-07-15 17:04:22.155081] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:22:16.012 [2024-07-15 17:04:22.155130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.012 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.012 [2024-07-15 17:04:22.211157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.012 [2024-07-15 17:04:22.292367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.012 [2024-07-15 17:04:22.292402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.012 [2024-07-15 17:04:22.292409] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.012 [2024-07-15 17:04:22.292416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.012 [2024-07-15 17:04:22.292421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:16.012 [2024-07-15 17:04:22.292461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.012 [2024-07-15 17:04:22.292594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.012 [2024-07-15 17:04:22.292613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.012 [2024-07-15 17:04:22.292614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.581 17:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.581 17:04:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:16.581 17:04:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:16.581 [2024-07-15 17:04:23.103432] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.581 17:04:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:16.581 17:04:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.581 17:04:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.581 17:04:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:16.839 Malloc1 00:22:16.839 17:04:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.098 17:04:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:17.098 17:04:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.356 
[2024-07-15 17:04:23.901854] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.356 17:04:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:17.615 17:04:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:17.873 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:17.873 fio-3.35 00:22:17.873 Starting 1 thread 00:22:17.873 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.474 00:22:20.474 test: (groupid=0, jobs=1): err= 0: pid=170445: Mon Jul 15 17:04:26 2024 00:22:20.474 read: IOPS=11.8k, BW=45.9MiB/s (48.2MB/s)(92.1MiB/2005msec) 00:22:20.474 slat (nsec): 
min=1597, max=252938, avg=1758.31, stdev=2309.30 00:22:20.474 clat (usec): min=3155, max=10735, avg=6019.25, stdev=448.51 00:22:20.474 lat (usec): min=3190, max=10737, avg=6021.01, stdev=448.39 00:22:20.474 clat percentiles (usec): 00:22:20.474 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:22:20.474 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:22:20.474 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6521], 95.00th=[ 6718], 00:22:20.474 | 99.00th=[ 6980], 99.50th=[ 7046], 99.90th=[ 8455], 99.95th=[ 9634], 00:22:20.474 | 99.99th=[10290] 00:22:20.474 bw ( KiB/s): min=46000, max=47664, per=100.00%, avg=47050.00, stdev=726.59, samples=4 00:22:20.474 iops : min=11500, max=11916, avg=11762.50, stdev=181.65, samples=4 00:22:20.474 write: IOPS=11.7k, BW=45.7MiB/s (47.9MB/s)(91.6MiB/2005msec); 0 zone resets 00:22:20.474 slat (nsec): min=1652, max=233738, avg=1842.89, stdev=1705.48 00:22:20.474 clat (usec): min=2507, max=9574, avg=4843.76, stdev=380.93 00:22:20.474 lat (usec): min=2523, max=9576, avg=4845.60, stdev=380.87 00:22:20.474 clat percentiles (usec): 00:22:20.474 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4555], 00:22:20.474 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:22:20.474 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:22:20.474 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 7767], 99.95th=[ 8979], 00:22:20.474 | 99.99th=[ 9503] 00:22:20.474 bw ( KiB/s): min=46432, max=47392, per=99.96%, avg=46768.00, stdev=432.89, samples=4 00:22:20.474 iops : min=11608, max=11848, avg=11692.00, stdev=108.22, samples=4 00:22:20.474 lat (msec) : 4=0.61%, 10=99.37%, 20=0.02% 00:22:20.474 cpu : usr=70.31%, sys=26.80%, ctx=45, majf=0, minf=6 00:22:20.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:20.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:20.474 issued rwts: total=23584,23452,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:20.474 00:22:20.474 Run status group 0 (all jobs): 00:22:20.474 READ: bw=45.9MiB/s (48.2MB/s), 45.9MiB/s-45.9MiB/s (48.2MB/s-48.2MB/s), io=92.1MiB (96.6MB), run=2005-2005msec 00:22:20.474 WRITE: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=91.6MiB (96.1MB), run=2005-2005msec 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:20.474 17:04:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:20.474 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:20.474 fio-3.35 00:22:20.474 Starting 1 thread 00:22:20.474 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.010 00:22:23.010 test: (groupid=0, jobs=1): err= 0: pid=171016: Mon Jul 15 17:04:29 2024 00:22:23.010 read: IOPS=10.3k, BW=161MiB/s (169MB/s)(323MiB/2005msec) 00:22:23.010 slat (nsec): 
min=2558, max=86934, avg=2851.15, stdev=1242.97 00:22:23.010 clat (usec): min=1107, max=50795, avg=7497.50, stdev=3607.56 00:22:23.010 lat (usec): min=1110, max=50798, avg=7500.35, stdev=3607.61 00:22:23.010 clat percentiles (usec): 00:22:23.010 | 1.00th=[ 3884], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5669], 00:22:23.010 | 30.00th=[ 6128], 40.00th=[ 6718], 50.00th=[ 7242], 60.00th=[ 7767], 00:22:23.010 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9503], 95.00th=[10552], 00:22:23.010 | 99.00th=[12911], 99.50th=[43779], 99.90th=[49546], 99.95th=[50070], 00:22:23.010 | 99.99th=[50594] 00:22:23.010 bw ( KiB/s): min=74496, max=92992, per=49.68%, avg=81968.00, stdev=7825.58, samples=4 00:22:23.010 iops : min= 4656, max= 5812, avg=5123.00, stdev=489.10, samples=4 00:22:23.010 write: IOPS=6222, BW=97.2MiB/s (102MB/s)(167MiB/1720msec); 0 zone resets 00:22:23.010 slat (usec): min=29, max=251, avg=31.88, stdev= 5.44 00:22:23.010 clat (usec): min=3810, max=14499, avg=8525.81, stdev=1550.13 00:22:23.010 lat (usec): min=3841, max=14530, avg=8557.69, stdev=1550.79 00:22:23.010 clat percentiles (usec): 00:22:23.010 | 1.00th=[ 5538], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7242], 00:22:23.010 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8717], 00:22:23.010 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11469], 00:22:23.010 | 99.00th=[12649], 99.50th=[13173], 99.90th=[13960], 99.95th=[14091], 00:22:23.010 | 99.99th=[14222] 00:22:23.010 bw ( KiB/s): min=78912, max=96064, per=85.32%, avg=84944.00, stdev=7593.67, samples=4 00:22:23.010 iops : min= 4932, max= 6004, avg=5309.00, stdev=474.60, samples=4 00:22:23.010 lat (msec) : 2=0.05%, 4=0.96%, 10=88.22%, 20=10.36%, 50=0.36% 00:22:23.010 lat (msec) : 100=0.05% 00:22:23.010 cpu : usr=84.38%, sys=14.27%, ctx=39, majf=0, minf=3 00:22:23.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:23.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:22:23.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:23.010 issued rwts: total=20677,10702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:23.010 00:22:23.010 Run status group 0 (all jobs): 00:22:23.010 READ: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=323MiB (339MB), run=2005-2005msec 00:22:23.010 WRITE: bw=97.2MiB/s (102MB/s), 97.2MiB/s-97.2MiB/s (102MB/s-102MB/s), io=167MiB (175MB), run=1720-1720msec 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.010 rmmod nvme_tcp 00:22:23.010 rmmod nvme_fabrics 00:22:23.010 rmmod nvme_keyring 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@489 -- # '[' -n 169998 ']' 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 169998 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 169998 ']' 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 169998 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 169998 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 169998' 00:22:23.010 killing process with pid 169998 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 169998 00:22:23.010 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 169998 00:22:23.269 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.269 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.269 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.269 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.269 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.269 17:04:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.269 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.269 17:04:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:25.798 17:04:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.798 00:22:25.798 real 0m14.935s 00:22:25.798 user 0m46.273s 00:22:25.798 sys 0m5.770s 00:22:25.798 17:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:25.798 17:04:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.798 ************************************ 00:22:25.798 END TEST nvmf_fio_host 00:22:25.798 ************************************ 00:22:25.798 17:04:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:25.798 17:04:31 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:25.798 17:04:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:25.798 17:04:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:25.798 17:04:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:25.798 ************************************ 00:22:25.798 START TEST nvmf_failover 00:22:25.798 ************************************ 00:22:25.798 17:04:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:25.798 * Looking for test storage... 
00:22:25.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.798 17:04:32 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.798 17:04:32 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.798 17:04:32 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:25.799 17:04:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:31.069 17:04:37 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.069 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.070 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:31.070 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.070 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:31.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:22:31.070 00:22:31.070 --- 10.0.0.2 ping statistics --- 00:22:31.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.070 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:22:31.070 00:22:31.070 --- 10.0.0.1 ping statistics --- 00:22:31.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.070 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=174791 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 174791 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 174791 ']' 
00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.070 17:04:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:31.070 [2024-07-15 17:04:37.431935] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:22:31.070 [2024-07-15 17:04:37.431978] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.070 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.070 [2024-07-15 17:04:37.488789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:31.070 [2024-07-15 17:04:37.568418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.070 [2024-07-15 17:04:37.568454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.070 [2024-07-15 17:04:37.568462] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.070 [2024-07-15 17:04:37.568468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.070 [2024-07-15 17:04:37.568473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.070 [2024-07-15 17:04:37.568571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.070 [2024-07-15 17:04:37.568657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.070 [2024-07-15 17:04:37.568659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.637 17:04:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.637 17:04:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:31.637 17:04:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.637 17:04:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:31.637 17:04:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:31.637 17:04:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.637 17:04:38 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:31.896 [2024-07-15 17:04:38.433018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.896 17:04:38 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:32.155 Malloc0 00:22:32.155 17:04:38 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:32.414 17:04:38 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:32.414 17:04:39 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.672 [2024-07-15 17:04:39.185526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.672 17:04:39 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:32.931 [2024-07-15 17:04:39.362019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:32.931 17:04:39 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:32.931 [2024-07-15 17:04:39.538586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:32.931 17:04:39 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:32.931 17:04:39 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=175238 00:22:32.931 17:04:39 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.931 17:04:39 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 175238 /var/tmp/bdevperf.sock 00:22:32.931 17:04:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 175238 ']' 00:22:32.931 17:04:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.931 17:04:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:32.931 17:04:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.931 17:04:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:32.931 17:04:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:33.867 17:04:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.867 17:04:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:33.867 17:04:40 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:34.126 NVMe0n1 00:22:34.126 17:04:40 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:34.385 00:22:34.385 17:04:41 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:34.385 17:04:41 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=175479 00:22:34.385 17:04:41 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:35.763 17:04:42 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.763 [2024-07-15 17:04:42.171211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca080 is same with the state(5) to be set 00:22:35.763 [2024-07-15 17:04:42.171261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca080 is same 
with the state(5) to be set 00:22:35.763 [2024-07-15 17:04:42.171269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca080 is same with the state(5) to be set 00:22:35.763 [2024-07-15 17:04:42.171276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca080 is same with the state(5) to be set 00:22:35.763 [2024-07-15 17:04:42.171283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca080 is same with the state(5) to be set 00:22:35.763 [2024-07-15 17:04:42.171289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca080 is same with the state(5) to be set 00:22:35.763 [2024-07-15 17:04:42.171295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca080 is same with the state(5) to be set 00:22:35.763 [2024-07-15 17:04:42.171301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca080 is same with the state(5) to be set 00:22:35.763 [2024-07-15 17:04:42.171307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca080 is same with the state(5) to be set 00:22:35.763 17:04:42 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:39.053 17:04:45 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:39.053 00:22:39.053 17:04:45 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:39.053 17:04:45 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:42.411 17:04:48 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:42.411 [2024-07-15 17:04:48.845293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.411 17:04:48 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:43.349 17:04:49 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:43.608 17:04:50 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 175479 00:22:50.179 0 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 175238 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 175238 ']' 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 175238 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 175238 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 175238' 00:22:50.179 killing process with pid 175238 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 175238 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 175238 00:22:50.179 17:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:50.179 [2024-07-15 17:04:39.598005] Starting SPDK v24.09-pre git sha1 
44e72e4e7 / DPDK 24.03.0 initialization... 00:22:50.179 [2024-07-15 17:04:39.598060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175238 ] 00:22:50.179 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.179 [2024-07-15 17:04:39.652079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.179 [2024-07-15 17:04:39.728534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.179 Running I/O for 15 seconds... 00:22:50.179 [2024-07-15 17:04:42.172781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.172819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.172836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.172844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.172853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.172861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.172869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.172876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.172884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.172891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.172899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.172906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.172914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.172920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.172928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.172935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.172943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.172949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.172957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
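The flood of per-command records in this part of the try.txt dump is the expected result of removing the 4420 listener while bdevperf (started above with `-q 128`) still has commands in flight: each queued READ/WRITE is completed with "ABORTED - SQ DELETION". A small parser can tally them from an excerpt; this is a hypothetical helper for summarizing such dumps, not an SPDK tool:

```python
import re

# Sketch: count aborted I/O per opcode from nvme_qpair print_command
# lines like the ones dumped above. Each print_command line is paired
# with an "ABORTED - SQ DELETION" print_completion record.
CMD = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)"
)

def tally_aborts(log: str) -> dict[str, int]:
    """Return {"READ": n, "WRITE": m} counts for aborted commands."""
    counts: dict[str, int] = {}
    for opcode, _lba, _length in CMD.findall(log):
        counts[opcode] = counts.get(opcode, 0) + 1
    return counts

sample = """
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
"""
assert tally_aborts(sample) == {"READ": 1, "WRITE": 1}
```

Because bdevperf reconnects to the 4421 path, these aborts are benign here; the test still ends with "0" failures and kills pid 175238 cleanly.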
00:22:50.179 [2024-07-15 17:04:42.172963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.172971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.172978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.172992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.172998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.173014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.173029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.179 [2024-07-15 17:04:42.173044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 
[2024-07-15 17:04:42.173219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.179 [2024-07-15 17:04:42.173282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.179 [2024-07-15 17:04:42.173290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173305] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 
[2024-07-15 17:04:42.173636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.180 [2024-07-15 17:04:42.173721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.180 [2024-07-15 17:04:42.173729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.182 [2024-07-15 17:04:42.174693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:50.182 [2024-07-15 17:04:42.174699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:50.182 [2024-07-15 17:04:42.174705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96704 len:8 PRP1 0x0 PRP2 0x0 00:22:50.182 [2024-07-15 17:04:42.174715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.182 [2024-07-15 17:04:42.174764] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22dc300 was disconnected and 
freed. reset controller. 00:22:50.182 [2024-07-15 17:04:42.174772] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:50.182 [2024-07-15 17:04:42.174792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.182 [2024-07-15 17:04:42.174799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.182 [2024-07-15 17:04:42.174806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.182 [2024-07-15 17:04:42.174813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.182 [2024-07-15 17:04:42.174820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.182 [2024-07-15 17:04:42.174826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.182 [2024-07-15 17:04:42.174833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.182 [2024-07-15 17:04:42.174840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.182 [2024-07-15 17:04:42.174846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:50.182 [2024-07-15 17:04:42.174880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22be540 (9): Bad file descriptor 00:22:50.182 [2024-07-15 17:04:42.177692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.182 [2024-07-15 17:04:42.212605] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:50.182 [2024-07-15 17:04:45.629998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.182 [2024-07-15 17:04:45.630038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.182 [2024-07-15 17:04:45.630053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.182 [2024-07-15 17:04:45.630060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.183 [2024-07-15 17:04:45.630752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.183 [2024-07-15 17:04:45.630767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 
[2024-07-15 17:04:45.630775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.183 [2024-07-15 17:04:45.630781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.183 [2024-07-15 17:04:45.630795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.183 [2024-07-15 17:04:45.630811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.183 [2024-07-15 17:04:45.630825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.183 [2024-07-15 17:04:45.630839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.183 [2024-07-15 17:04:45.630875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.183 [2024-07-15 17:04:45.630881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.630889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.630895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.630903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.630909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.630917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.630923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.630931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.630937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.630946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.630952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.630960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.630966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.630974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.630980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.630989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.630996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.631011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 
17:04:45.631019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.631025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.631039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.631053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.631067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.631081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631094] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631264] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:50.184 [2024-07-15 17:04:45.631428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.184 [2024-07-15 17:04:45.631443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.631457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.631472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.631485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.184 [2024-07-15 17:04:45.631493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.184 [2024-07-15 17:04:45.631499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 [2024-07-15 17:04:45.631513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 [2024-07-15 17:04:45.631529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 [2024-07-15 17:04:45.631545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 [2024-07-15 17:04:45.631559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 [2024-07-15 17:04:45.631573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 [2024-07-15 17:04:45.631586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 [2024-07-15 17:04:45.631600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 [2024-07-15 17:04:45.631615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 [2024-07-15 17:04:45.631629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 [2024-07-15 17:04:45.631643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 [2024-07-15 17:04:45.631658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.185 
[2024-07-15 17:04:45.631672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.185 [2024-07-15 17:04:45.631874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:50.185 [2024-07-15 17:04:45.631898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:50.185 [2024-07-15 17:04:45.631903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20472 len:8 PRP1 0x0 PRP2 0x0 00:22:50.185 [2024-07-15 17:04:45.631911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.185 [2024-07-15 17:04:45.631953] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2489380 was disconnected and freed. reset controller. 
00:22:50.185 [2024-07-15 17:04:45.631961] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:50.185 [2024-07-15 17:04:45.631982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:50.185 [2024-07-15 17:04:45.631989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.185 [2024-07-15 17:04:45.631996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:50.185 [2024-07-15 17:04:45.632002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.185 [2024-07-15 17:04:45.632009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:50.185 [2024-07-15 17:04:45.632016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.185 [2024-07-15 17:04:45.632022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:50.185 [2024-07-15 17:04:45.632029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.185 [2024-07-15 17:04:45.632035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:50.185 [2024-07-15 17:04:45.634863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:50.185 [2024-07-15 17:04:45.634890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22be540 (9): Bad file descriptor
00:22:50.185 [2024-07-15 17:04:45.744751] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:50.185 [2024-07-15 17:04:50.042281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:50.185 [2024-07-15 17:04:50.042322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.185 [2024-07-15 17:04:50.042337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:50.185 [2024-07-15 17:04:50.042345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated abort records elided: WRITE sqid:1 commands for lba 47984-48640 and READ sqid:1 commands for lba 47736-47816, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:50.188 [2024-07-15 17:04:50.043690]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.188 [2024-07-15 17:04:50.043698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.188 [2024-07-15 17:04:50.043704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.188 [2024-07-15 17:04:50.043712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.188 [2024-07-15 17:04:50.043718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.188 [2024-07-15 17:04:50.043726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.188 [2024-07-15 17:04:50.043732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.188 [2024-07-15 17:04:50.043740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.188 [2024-07-15 17:04:50.043746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.188 [2024-07-15 17:04:50.043754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.188 [2024-07-15 17:04:50.043760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.188 [2024-07-15 17:04:50.043767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.188 [2024-07-15 17:04:50.043774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.188 [2024-07-15 17:04:50.043782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.189 [2024-07-15 17:04:50.043788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 
[2024-07-15 17:04:50.043854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.043987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.043994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.044001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.044015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.044031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.044045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.189 [2024-07-15 17:04:50.044059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.189 [2024-07-15 17:04:50.044074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.189 [2024-07-15 17:04:50.044088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 
[2024-07-15 17:04:50.044095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.189 [2024-07-15 17:04:50.044101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.189 [2024-07-15 17:04:50.044116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.189 [2024-07-15 17:04:50.044130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:50.189 [2024-07-15 17:04:50.044155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:50.189 [2024-07-15 17:04:50.044161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48744 len:8 PRP1 0x0 PRP2 0x0 00:22:50.189 [2024-07-15 17:04:50.044167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044208] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2489170 was disconnected and freed. reset controller. 
00:22:50.189 [2024-07-15 17:04:50.044217] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:50.189 [2024-07-15 17:04:50.044240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.189 [2024-07-15 17:04:50.044247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.189 [2024-07-15 17:04:50.044261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.189 [2024-07-15 17:04:50.044276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.189 [2024-07-15 17:04:50.044289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.189 [2024-07-15 17:04:50.044295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:50.189 [2024-07-15 17:04:50.047111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.189 [2024-07-15 17:04:50.047137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22be540 (9): Bad file descriptor 00:22:50.189 [2024-07-15 17:04:50.116457] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:50.189 00:22:50.189 Latency(us) 00:22:50.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.189 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:50.189 Verification LBA range: start 0x0 length 0x4000 00:22:50.189 NVMe0n1 : 15.00 10906.08 42.60 646.04 0.00 11057.61 605.50 12822.26 00:22:50.189 =================================================================================================================== 00:22:50.189 Total : 10906.08 42.60 646.04 0.00 11057.61 605.50 12822.26 00:22:50.189 Received shutdown signal, test time was about 15.000000 seconds 00:22:50.189 00:22:50.189 Latency(us) 00:22:50.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.189 =================================================================================================================== 00:22:50.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.189 17:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:50.189 17:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:50.189 17:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:50.189 17:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=177998 00:22:50.189 17:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:50.189 17:04:56 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 177998 /var/tmp/bdevperf.sock 00:22:50.189 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 177998 ']' 00:22:50.189 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.189 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.189 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.189 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.189 17:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.753 17:04:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.754 17:04:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:50.754 17:04:57 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:50.754 [2024-07-15 17:04:57.401213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:51.010 17:04:57 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:51.010 [2024-07-15 17:04:57.593774] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:51.010 17:04:57 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:51.266 NVMe0n1 00:22:51.266 17:04:57 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:51.832 00:22:51.832 17:04:58 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:52.090 00:22:52.090 17:04:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:52.090 17:04:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:52.348 17:04:58 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:52.348 17:04:58 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:55.633 17:05:01 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.633 17:05:01 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:55.633 17:05:02 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:55.633 17:05:02 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=178922 00:22:55.633 17:05:02 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 178922 00:22:57.010 0 00:22:57.010 17:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:57.010 [2024-07-15 17:04:56.431164] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:22:57.010 [2024-07-15 17:04:56.431213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177998 ] 00:22:57.010 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.010 [2024-07-15 17:04:56.484887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.010 [2024-07-15 17:04:56.555258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.010 [2024-07-15 17:04:58.940624] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:57.010 [2024-07-15 17:04:58.940673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.010 [2024-07-15 17:04:58.940684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.010 [2024-07-15 17:04:58.940692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.010 [2024-07-15 17:04:58.940699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.010 [2024-07-15 17:04:58.940707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.010 [2024-07-15 17:04:58.940714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.010 [2024-07-15 17:04:58.940723] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.010 [2024-07-15 17:04:58.940729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.010 [2024-07-15 17:04:58.940736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:57.010 [2024-07-15 17:04:58.940763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:57.010 [2024-07-15 17:04:58.940776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204b540 (9): Bad file descriptor 00:22:57.010 [2024-07-15 17:04:58.951561] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:57.010 Running I/O for 1 seconds... 00:22:57.010 00:22:57.010 Latency(us) 00:22:57.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.010 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:57.010 Verification LBA range: start 0x0 length 0x4000 00:22:57.010 NVMe0n1 : 1.00 11057.22 43.19 0.00 0.00 11532.56 2478.97 12195.39 00:22:57.010 =================================================================================================================== 00:22:57.010 Total : 11057.22 43.19 0.00 0.00 11532.56 2478.97 12195.39 00:22:57.010 17:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:57.010 17:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:57.010 17:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:22:57.010 17:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:57.011 17:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:57.268 17:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:57.527 17:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:00.811 17:05:06 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.811 17:05:06 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 177998 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 177998 ']' 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 177998 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 177998 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 177998' 00:23:00.811 killing process with pid 177998 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 
177998 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 177998 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:00.811 17:05:07 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:01.070 rmmod nvme_tcp 00:23:01.070 rmmod nvme_fabrics 00:23:01.070 rmmod nvme_keyring 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 174791 ']' 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 174791 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 174791 ']' 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 174791 00:23:01.070 17:05:07 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 174791 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 174791' 00:23:01.070 killing process with pid 174791 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 174791 00:23:01.070 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 174791 00:23:01.329 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:01.329 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:01.329 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:01.329 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:01.329 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:01.329 17:05:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.329 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:01.329 17:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.865 17:05:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:03.865 00:23:03.865 real 0m38.025s 00:23:03.865 user 2m2.954s 00:23:03.865 sys 0m7.333s 00:23:03.865 17:05:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:03.865 17:05:09 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:23:03.865 ************************************ 00:23:03.865 END TEST nvmf_failover 00:23:03.865 ************************************ 00:23:03.865 17:05:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:03.865 17:05:09 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:03.865 17:05:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:03.865 17:05:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:03.865 17:05:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:03.865 ************************************ 00:23:03.865 START TEST nvmf_host_discovery 00:23:03.865 ************************************ 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:03.865 * Looking for test storage... 
00:23:03.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.865 17:05:10 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:23:03.865 17:05:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:09.197 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.197 17:05:15 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.197 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:09.197 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.198 
17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:09.198 Found net devices under 0000:86:00.0: cvl_0_0 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:09.198 Found net devices under 0000:86:00.1: cvl_0_1 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.198 17:05:15 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:09.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:23:09.198 00:23:09.198 --- 10.0.0.2 ping statistics --- 00:23:09.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.198 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:23:09.198 00:23:09.198 --- 10.0.0.1 ping statistics --- 00:23:09.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.198 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
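The network plumbing traced above splits one machine into a target side (inside a network namespace) and an initiator side, then verifies connectivity with the two pings. A dry-run sketch of that same sequence follows; the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addressing are taken from the log, and the `run` wrapper only echoes each command (swap it for `sudo` to apply the steps for real, which requires root and the actual NICs):

```shell
# Dry-run replay of the netns setup performed by nvmf_tcp_init above.
# "run" only prints each command; replace it with "sudo" to execute.
run() { printf '+ %s\n' "$*"; }

run ip netns add cvl_0_0_ns_spdk                        # target namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target NIC inside
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                  # initiator -> target check
```

The namespace is what lets a single host act as both NVMe-oF target and initiator over a real TCP path rather than loopback.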
nvmfappstart -m 0x2 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=183143 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 183143 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 183143 ']' 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.198 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.198 [2024-07-15 17:05:15.438892] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:23:09.198 [2024-07-15 17:05:15.438937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.198 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.198 [2024-07-15 17:05:15.493737] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.198 [2024-07-15 17:05:15.571994] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.198 [2024-07-15 17:05:15.572030] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.198 [2024-07-15 17:05:15.572036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.199 [2024-07-15 17:05:15.572042] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.199 [2024-07-15 17:05:15.572047] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:09.199 [2024-07-15 17:05:15.572064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.766 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.766 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:09.766 17:05:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.766 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.766 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.766 17:05:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.766 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.767 [2024-07-15 17:05:16.273672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.767 [2024-07-15 17:05:16.281798] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
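Once `nvmf_tgt` is listening on its RPC socket, `discovery.sh` configures it with the RPC calls visible above: create the TCP transport, expose the well-known discovery subsystem on port 8009, and back it with two null bdevs. The log's `rpc_cmd` is a test-harness wrapper; the sketch below assumes SPDK's `scripts/rpc.py` is the underlying tool and uses `RPC="echo rpc.py"` to make it a dry run (drop the `echo`, and point `-s` at the real socket, to drive a live target). The flags are copied verbatim from the log:

```shell
# Dry-run sketch of the RPC sequence driven by discovery.sh above.
# RPC="echo rpc.py" only prints the calls; remove "echo" for a live target.
RPC="echo rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB IO units
$RPC nvmf_subsystem_add_listener \
    nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512                    # 1000 MiB, 512 B blocks
$RPC bdev_null_create null1 1000 512
```

Listening on 8009 with the `nqn.2014-08.org.nvmexpress.discovery` NQN is what makes the target answer discovery log page requests from the host side of the test.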
# rpc_cmd bdev_null_create null0 1000 512 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.767 null0 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.767 null1 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=183385 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 183385 /tmp/host.sock 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 183385 ']' 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:09.767 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.767 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.767 [2024-07-15 17:05:16.356486] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:23:09.767 [2024-07-15 17:05:16.356528] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183385 ] 00:23:09.767 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.767 [2024-07-15 17:05:16.409543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.026 [2024-07-15 17:05:16.489307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:10.594 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.853 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.854 [2024-07-15 17:05:17.492998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:10.854 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.113 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:11.113 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:11.113 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:11.113 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.113 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:11.113 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:11.113 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' 
'((notification_count' == 'expected_count))' 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:11.114 17:05:17 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:11.114 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:11.682 [2024-07-15 17:05:18.191397] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:11.682 [2024-07-15 17:05:18.191416] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:11.682 [2024-07-15 17:05:18.191428] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:11.682 [2024-07-15 17:05:18.279705] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:11.941 [2024-07-15 17:05:18.506974] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 
00:23:11.941 [2024-07-15 17:05:18.506992] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.200 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:12.463 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# get_bdev_list 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:12.464 17:05:18 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 [2024-07-15 17:05:18.973016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:12.464 [2024-07-15 17:05:18.973394] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:12.464 [2024-07-15 17:05:18.973415] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:12.464 17:05:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:12.464 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.464 [2024-07-15 17:05:19.060674] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:12.464 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:12.722 [2024-07-15 17:05:19.328718] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:12.722 [2024-07-15 17:05:19.328735] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:12.722 [2024-07-15 17:05:19.328740] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_notification_count 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.660 [2024-07-15 17:05:20.213444] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:13.660 [2024-07-15 17:05:20.213469] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 
'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.660 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:13.660 [2024-07-15 17:05:20.219016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.660 [2024-07-15 17:05:20.219035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.660 [2024-07-15 17:05:20.219044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.660 [2024-07-15 17:05:20.219051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.661 [2024-07-15 17:05:20.219058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.661 [2024-07-15 17:05:20.219065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.661 [2024-07-15 17:05:20.219072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.661 [2024-07-15 17:05:20.219078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.661 [2024-07-15 17:05:20.219085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf69f10 is same with the state(5) to be set 00:23:13.661 17:05:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.661 [2024-07-15 17:05:20.229030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf69f10 (9): Bad file descriptor 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.661 [2024-07-15 17:05:20.239068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:13.661 [2024-07-15 17:05:20.239403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.661 [2024-07-15 17:05:20.239419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf69f10 with addr=10.0.0.2, port=4420 00:23:13.661 [2024-07-15 17:05:20.239426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf69f10 is same with the state(5) to be set 00:23:13.661 [2024-07-15 17:05:20.239438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf69f10 (9): Bad file descriptor 00:23:13.661 [2024-07-15 17:05:20.239448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:13.661 [2024-07-15 17:05:20.239455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller 
reinitialization failed 00:23:13.661 [2024-07-15 17:05:20.239463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:13.661 [2024-07-15 17:05:20.239477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:13.661 [2024-07-15 17:05:20.249122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:13.661 [2024-07-15 17:05:20.249393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.661 [2024-07-15 17:05:20.249406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf69f10 with addr=10.0.0.2, port=4420 00:23:13.661 [2024-07-15 17:05:20.249413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf69f10 is same with the state(5) to be set 00:23:13.661 [2024-07-15 17:05:20.249423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf69f10 (9): Bad file descriptor 00:23:13.661 [2024-07-15 17:05:20.249434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:13.661 [2024-07-15 17:05:20.249440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:13.661 [2024-07-15 17:05:20.249447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:13.661 [2024-07-15 17:05:20.249456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:13.661 [2024-07-15 17:05:20.259174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:13.661 [2024-07-15 17:05:20.259450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.661 [2024-07-15 17:05:20.259465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf69f10 with addr=10.0.0.2, port=4420 00:23:13.661 [2024-07-15 17:05:20.259473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf69f10 is same with the state(5) to be set 00:23:13.661 [2024-07-15 17:05:20.259484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf69f10 (9): Bad file descriptor 00:23:13.661 [2024-07-15 17:05:20.259494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:13.661 [2024-07-15 17:05:20.259500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:13.661 [2024-07-15 17:05:20.259507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:13.661 [2024-07-15 17:05:20.259516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:13.661 [2024-07-15 17:05:20.269231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:13.661 [2024-07-15 17:05:20.269433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.661 [2024-07-15 17:05:20.269446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf69f10 with addr=10.0.0.2, port=4420 00:23:13.661 [2024-07-15 17:05:20.269453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf69f10 is same with the state(5) to be set 00:23:13.661 [2024-07-15 17:05:20.269465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf69f10 (9): Bad file descriptor 00:23:13.661 [2024-07-15 17:05:20.269474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:13.661 [2024-07-15 17:05:20.269480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.661 [2024-07-15 17:05:20.269487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:13.661 [2024-07-15 17:05:20.269500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.661 [2024-07-15 17:05:20.279281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:13.661 [2024-07-15 17:05:20.279476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.661 [2024-07-15 17:05:20.279488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf69f10 with addr=10.0.0.2, port=4420 00:23:13.661 [2024-07-15 17:05:20.279495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf69f10 is same 
with the state(5) to be set 00:23:13.661 [2024-07-15 17:05:20.279506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf69f10 (9): Bad file descriptor 00:23:13.661 [2024-07-15 17:05:20.279515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:13.661 [2024-07-15 17:05:20.279521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:13.661 [2024-07-15 17:05:20.279528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:13.661 [2024-07-15 17:05:20.279538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:13.661 [2024-07-15 17:05:20.289332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:13.661 [2024-07-15 17:05:20.289482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.661 [2024-07-15 17:05:20.289494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf69f10 with addr=10.0.0.2, port=4420 00:23:13.661 [2024-07-15 17:05:20.289501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf69f10 is same with the state(5) to be set 00:23:13.661 [2024-07-15 17:05:20.289512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf69f10 (9): Bad file descriptor 00:23:13.661 [2024-07-15 17:05:20.289522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:13.661 [2024-07-15 17:05:20.289528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:13.661 [2024-07-15 17:05:20.289535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:13.661 [2024-07-15 17:05:20.289544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.661 [2024-07-15 17:05:20.299385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:13.661 [2024-07-15 17:05:20.299535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.661 [2024-07-15 17:05:20.299548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf69f10 with addr=10.0.0.2, port=4420 00:23:13.661 [2024-07-15 17:05:20.299555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf69f10 is same with the state(5) to be set 00:23:13.661 [2024-07-15 17:05:20.299568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf69f10 (9): Bad file descriptor 00:23:13.661 [2024-07-15 17:05:20.299578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:13.661 [2024-07-15 17:05:20.299585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:13.661 [2024-07-15 17:05:20.299591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:13.661 [2024-07-15 17:05:20.299601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:13.661 [2024-07-15 17:05:20.299955] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:13.661 [2024-07-15 17:05:20.299969] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:13.661 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_names 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:13.921 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.922 17:05:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.922 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.300 [2024-07-15 17:05:21.630801] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:15.300 [2024-07-15 17:05:21.630818] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:15.300 [2024-07-15 17:05:21.630828] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:15.300 [2024-07-15 17:05:21.759226] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:15.300 [2024-07-15 17:05:21.825634] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:15.300 [2024-07-15 17:05:21.825662] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.300 request: 00:23:15.300 { 00:23:15.300 "name": "nvme", 00:23:15.300 "trtype": "tcp", 00:23:15.300 "traddr": "10.0.0.2", 00:23:15.300 "adrfam": "ipv4", 00:23:15.300 "trsvcid": "8009", 00:23:15.300 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:15.300 "wait_for_attach": true, 00:23:15.300 "method": "bdev_nvme_start_discovery", 00:23:15.300 "req_id": 1 00:23:15.300 } 00:23:15.300 Got JSON-RPC error 
response 00:23:15.300 response: 00:23:15.300 { 00:23:15.300 "code": -17, 00:23:15.300 "message": "File exists" 00:23:15.300 } 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.300 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.300 request: 00:23:15.300 { 00:23:15.300 "name": "nvme_second", 00:23:15.300 "trtype": "tcp", 
00:23:15.300 "traddr": "10.0.0.2", 00:23:15.300 "adrfam": "ipv4", 00:23:15.300 "trsvcid": "8009", 00:23:15.300 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:15.300 "wait_for_attach": true, 00:23:15.300 "method": "bdev_nvme_start_discovery", 00:23:15.300 "req_id": 1 00:23:15.300 } 00:23:15.300 Got JSON-RPC error response 00:23:15.300 response: 00:23:15.300 { 00:23:15.301 "code": -17, 00:23:15.301 "message": "File exists" 00:23:15.301 } 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:15.301 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.560 17:05:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.497 [2024-07-15 17:05:23.065787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.497 [2024-07-15 17:05:23.065814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa6a00 with addr=10.0.0.2, port=8010 00:23:16.497 [2024-07-15 17:05:23.065826] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:16.497 [2024-07-15 17:05:23.065832] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:16.497 [2024-07-15 17:05:23.065838] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:17.431 [2024-07-15 17:05:24.068279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.431 [2024-07-15 17:05:24.068303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa6a00 with addr=10.0.0.2, port=8010 00:23:17.431 [2024-07-15 17:05:24.068317] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:17.431 [2024-07-15 17:05:24.068323] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:17.431 [2024-07-15 17:05:24.068329] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:18.808 [2024-07-15 17:05:25.070433] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:18.808 request: 00:23:18.808 { 00:23:18.808 "name": "nvme_second", 00:23:18.808 "trtype": "tcp", 00:23:18.808 "traddr": "10.0.0.2", 00:23:18.808 "adrfam": "ipv4", 00:23:18.808 "trsvcid": "8010", 00:23:18.808 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:18.808 "wait_for_attach": false, 00:23:18.808 
"attach_timeout_ms": 3000, 00:23:18.808 "method": "bdev_nvme_start_discovery", 00:23:18.808 "req_id": 1 00:23:18.808 } 00:23:18.808 Got JSON-RPC error response 00:23:18.808 response: 00:23:18.808 { 00:23:18.808 "code": -110, 00:23:18.808 "message": "Connection timed out" 00:23:18.808 } 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 183385 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 
00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:18.808 rmmod nvme_tcp 00:23:18.808 rmmod nvme_fabrics 00:23:18.808 rmmod nvme_keyring 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 183143 ']' 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 183143 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 183143 ']' 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 183143 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 183143 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
183143' 00:23:18.808 killing process with pid 183143 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 183143 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 183143 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.808 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:21.346 00:23:21.346 real 0m17.449s 00:23:21.346 user 0m21.997s 00:23:21.346 sys 0m5.208s 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.346 ************************************ 00:23:21.346 END TEST nvmf_host_discovery 00:23:21.346 ************************************ 00:23:21.346 17:05:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:21.346 17:05:27 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:21.346 17:05:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:23:21.346 17:05:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:21.346 17:05:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:21.346 ************************************ 00:23:21.346 START TEST nvmf_host_multipath_status 00:23:21.346 ************************************ 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:21.346 * Looking for test storage... 00:23:21.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:21.346 17:05:27 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.346 
17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:21.346 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:26.658 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:26.658 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:26.658 Found net devices under 0000:86:00.0: cvl_0_0 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:26.658 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:26.659 Found net devices under 0000:86:00.1: cvl_0_1 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.659 17:05:32 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.659 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:26.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:26.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:23:26.659 00:23:26.659 --- 10.0.0.2 ping statistics --- 00:23:26.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.659 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:23:26.659 00:23:26.659 --- 10.0.0.1 ping statistics --- 00:23:26.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.659 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=188457 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 188457 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 188457 ']' 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.659 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:26.659 [2024-07-15 17:05:33.245425] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:23:26.659 [2024-07-15 17:05:33.245470] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.659 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.659 [2024-07-15 17:05:33.300973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:26.918 [2024-07-15 17:05:33.381059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.918 [2024-07-15 17:05:33.381094] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.918 [2024-07-15 17:05:33.381101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.918 [2024-07-15 17:05:33.381108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.918 [2024-07-15 17:05:33.381113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:26.918 [2024-07-15 17:05:33.381148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.919 [2024-07-15 17:05:33.381151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.485 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.485 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:27.485 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.485 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:27.485 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:27.486 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.486 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=188457 00:23:27.486 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:27.744 [2024-07-15 17:05:34.236432] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.744 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:28.003 Malloc0 00:23:28.003 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:28.003 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:28.262 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.520 [2024-07-15 17:05:34.961137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.520 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:28.520 [2024-07-15 17:05:35.129542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:28.520 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=188717 00:23:28.520 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.520 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:28.520 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 188717 /var/tmp/bdevperf.sock 00:23:28.520 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 188717 ']' 00:23:28.520 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.520 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.520 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
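Stripped of xtrace noise, the target-side setup traced above reduces to a short RPC sequence. The sketch below is reconstructed from the log, not a standalone script: it assumes an `nvmf_tgt` process is already running and reachable via `rpc.py` on its default socket, and the NQN, address, and port values are the ones this test uses.

```shell
# Sketch of the target setup performed by multipath_status.sh (reconstructed
# from the trace; assumes a running nvmf_tgt reachable via rpc.py defaults).
RPC=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM disk, 512 B blocks
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns $NQN Malloc0

# Two listeners on the same subsystem -- these are the two paths whose
# ANA states the multipath test flips between.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
```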
00:23:28.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.520 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.520 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.454 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.454 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:29.454 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:29.712 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:29.969 Nvme0n1 00:23:29.969 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:30.228 Nvme0n1 00:23:30.487 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:30.487 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:32.391 17:05:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:32.391 17:05:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:32.650 17:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:32.910 17:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:33.850 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:33.850 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:33.850 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.850 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:34.109 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.109 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:34.109 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.109 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:34.109 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:34.109 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:34.109 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.109 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:34.367 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.367 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:34.367 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.367 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:34.625 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.625 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:34.625 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.625 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:34.625 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.626 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:34.626 17:05:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.626 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:34.884 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.884 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:34.884 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:35.144 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:35.403 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:36.339 17:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:36.339 17:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:36.339 17:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.339 17:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:36.596 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:23:36.596 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:36.596 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.596 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:36.596 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.597 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:36.597 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.597 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:36.854 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.854 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:36.854 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.854 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:37.112 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.112 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:23:37.112 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.112 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:37.112 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.112 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:37.112 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.112 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.371 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.371 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:37.371 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:37.629 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:37.888 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:38.824 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:38.824 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:38.824 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.824 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:39.082 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.082 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:39.082 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.082 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.082 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:39.082 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.082 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.082 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:39.341 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.341 17:05:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:39.341 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.341 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:39.599 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.599 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:39.599 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.599 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:39.599 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.599 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:39.599 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.599 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:39.858 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.858 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
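Each `check_status` cycle above repeats the same primitive six times: call `bdev_nvme_get_io_paths` over the bdevperf RPC socket, pull one boolean (`current`, `connected`, or `accessible`) for one listener port with `jq`, and compare it to the expected value. The sketch below mirrors that `port_status` helper against an inlined sample document; the JSON shape and field names are assumptions inferred from the `jq` filters in the trace and may differ across SPDK versions.

```shell
# Hypothetical sample of bdev_nvme_get_io_paths output (shape inferred from
# the jq filters in the log; real SPDK output may carry more fields).
sample='{"poll_groups":[{"io_paths":[
  {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
  {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":true}]}]}'

# port_status PORT FIELD EXPECTED -- mirrors the helper used by the test:
# extract one boolean per listener port and compare it to the expected value.
# (The real helper pipes `rpc.py -s /var/tmp/bdevperf.sock
# bdev_nvme_get_io_paths` into jq instead of $sample.)
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$(printf '%s' "$sample" |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

port_status 4420 current true && echo "4420 is the current path"
```

With the sample above, port 4420 is the `current` path and both ports are `connected` and `accessible`, matching the first `check_status true false true true true true` invocation in the trace.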
00:23:39.858 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:40.116 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:40.375 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:41.310 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:41.310 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:41.310 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.310 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:41.582 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.582 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:41.582 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.582 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:41.582 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:23:41.582 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:41.582 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.582 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:41.861 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.861 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:41.861 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.861 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:41.861 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.861 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:42.120 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.120 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:42.120 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.120 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:23:42.120 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.120 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:42.379 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:42.379 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:42.379 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:42.638 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:42.638 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:44.013 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:44.013 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:44.013 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:44.013 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.013 17:05:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.013 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:44.013 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.013 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:44.013 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.013 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:44.013 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.013 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:44.271 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.271 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:44.271 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.271 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:44.530 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.530 
17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:44.530 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.530 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:44.530 17:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.530 17:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:44.530 17:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.530 17:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:44.788 17:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.788 17:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:44.788 17:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:45.047 17:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:45.305 17:05:51 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1
00:23:46.242 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:23:46.242 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:23:46.242 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:46.242 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:46.242 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:46.242 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:46.242 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:46.242 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:46.501 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:46.501 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:46.501 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:46.501 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:46.761 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:46.761 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:46.761 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:46.761 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:47.019 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:47.019 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:23:47.019 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:47.019 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:47.019 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:47.019 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:47.019 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:47.019 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:47.276 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:47.276 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:23:47.533 17:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:23:47.533 17:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:23:47.791 17:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:47.791 17:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:49.165 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:49.422 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:49.422 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:49.422 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:49.422 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:49.680 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:49.680 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:49.680 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:49.680 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:49.680 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:49.680 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:49.680 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:49.680 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:49.938 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:49.938 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:23:49.938 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:50.195 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:50.453 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:23:51.389 17:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:23:51.389 17:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:23:51.389 17:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:51.389 17:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:51.648 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:51.648 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:51.648 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:51.648 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:51.906 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:51.906 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:51.906 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:51.906 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:51.906 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:51.906 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:51.906 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:51.906 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:52.165 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:52.165 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:52.165 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:52.165 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:52.424 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:52.424 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:52.424 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:52.424 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:52.424 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:52.424 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:23:52.424 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:52.683 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:23:52.941 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:23:53.877 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:23:53.877 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:53.877 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:53.877 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:54.136 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:54.136 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:54.136 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:54.136 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:54.394 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:54.394 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:54.394 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:54.394 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:54.394 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:54.394 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:54.394 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:54.394 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:54.653 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:54.653 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:54.653 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:54.653 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:54.911 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:54.911 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:54.912 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:54.912 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:54.912 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:54.912 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:23:54.912 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:55.170 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:23:55.429 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:23:56.365 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:23:56.365 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:56.365 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:56.365 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:56.624 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:56.624 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:56.624 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:56.624 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:56.907 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:56.907 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:56.907 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:56.907 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:56.907 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:56.907 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:56.907 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:56.907 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:57.164 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:57.164 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:57.164 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.164 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:57.422 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:57.422 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:23:57.422 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:57.422 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 188717
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 188717 ']'
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 188717
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 188717
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 188717'
killing process with pid 188717
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 188717
00:23:57.422 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 188717
00:23:57.703 Connection closed with partial response:
00:23:57.703
00:23:57.703
00:23:57.703 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 188717
00:23:57.703 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:57.703 [2024-07-15 17:05:35.203691] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:23:57.703 [2024-07-15 17:05:35.203742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188717 ]
00:23:57.703 EAL: No free 2048 kB hugepages reported on node 1
00:23:57.703 [2024-07-15 17:05:35.255536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:57.703 [2024-07-15 17:05:35.333444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:57.703 Running I/O for 90 seconds...
00:23:57.703 [2024-07-15 17:05:49.078004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:23:57.703 [2024-07-15 17:05:49.078559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.703 [2024-07-15 17:05:49.078566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.078992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.078998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.079010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.079017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.079029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.704 [2024-07-15 17:05:49.079036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:23:57.704 [2024-07-15 17:05:49.079048] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.704 [2024-07-15 17:05:49.079401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.704 [2024-07-15 17:05:49.079413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.079420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.079814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.079826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.079840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.079847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.079861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.079868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.079880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.079887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.079898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.079905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.079917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.079924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.079936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.079943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.079954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.079960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.079975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.079982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.079994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.705 [2024-07-15 17:05:49.080588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.705 [2024-07-15 17:05:49.080600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.080607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.080626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.080645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.080665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.080685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.080705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.080724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.080743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.080982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.080995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.081002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.081021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.081041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.706 [2024-07-15 17:05:49.081591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.706 [2024-07-15 17:05:49.081886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.706 [2024-07-15 17:05:49.081894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.081906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.081913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.081925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.081932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.081944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.081954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.081966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.081973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.081986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.081993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.082504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.082511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.093059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.093071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.093543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.707 [2024-07-15 17:05:49.093559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.707 [2024-07-15 17:05:49.093574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.093982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.093995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.708 [2024-07-15 17:05:49.094378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.708 [2024-07-15 17:05:49.094384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.094818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.094839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.094857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.094879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.094898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.094920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.094939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.094959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.094979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.094991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.094998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.095011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.095018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.095031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.095037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.095049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.095058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.095070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.095078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.095090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.095097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.095740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.095754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.095769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.095777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.095789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.709 [2024-07-15 17:05:49.095799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.709 [2024-07-15 17:05:49.095813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.709 [2024-07-15 17:05:49.095820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.095833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.095840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.095853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.095860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.095872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.095879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.095891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.095899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.095911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.095918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.095931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.095939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.095951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.095959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.095971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.095978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.095991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.095998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.710 [2024-07-15 17:05:49.096626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.710 [2024-07-15 17:05:49.096639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.711 [2024-07-15 17:05:49.096646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.711 [2024-07-15 17:05:49.096658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.711 [2024-07-15 17:05:49.096666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.711 [2024-07-15 17:05:49.096680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.711 [2024-07-15 17:05:49.096687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.711 [2024-07-15 17:05:49.096699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.711 [2024-07-15 17:05:49.096705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.711 [2024-07-15 17:05:49.096729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.711 [2024-07-15 17:05:49.096736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.711 [2024-07-15 17:05:49.097285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.711 [2024-07-15 17:05:49.097299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.097984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.097991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.098003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.098010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.098022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.106452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.106469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.106477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.106489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.711 [2024-07-15 17:05:49.106496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:23:57.711 [2024-07-15 17:05:49.106508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.106984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.106992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.107011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.107030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.107049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.107069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.107088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.107108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.107128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.712 [2024-07-15 17:05:49.107612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.712 [2024-07-15 17:05:49.107634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.712 [2024-07-15 17:05:49.107655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.712 [2024-07-15 17:05:49.107676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.712 [2024-07-15 17:05:49.107695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.712 [2024-07-15 17:05:49.107716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:57.712 [2024-07-15 17:05:49.107729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.712 [2024-07-15 17:05:49.107736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.713 [2024-07-15 17:05:49.107756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.713 [2024-07-15 17:05:49.107777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.713 [2024-07-15 17:05:49.107797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.713 [2024-07-15 17:05:49.107817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.713 [2024-07-15 17:05:49.107837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.713 [2024-07-15 17:05:49.107860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.713 [2024-07-15 17:05:49.107880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.713 [2024-07-15 17:05:49.107900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.713 [2024-07-15 17:05:49.107921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.107941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.713 [2024-07-15 17:05:49.107962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.107983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.107996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:23:57.713 [2024-07-15 17:05:49.108563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.713 [2024-07-15 17:05:49.108570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:23:57.714 [2024-07-15 17:05:49.108582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.714 [2024-07-15 17:05:49.108589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:23:57.714 [2024-07-15 17:05:49.108602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.714 [2024-07-15 17:05:49.108609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:23:57.714 [2024-07-15 17:05:49.108623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.714 [2024-07-15 17:05:49.108631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:23:57.714 [2024-07-15 17:05:49.108643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.714 [2024-07-15 17:05:49.108650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:23:57.714 [2024-07-15 17:05:49.108665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.714 [2024-07-15 17:05:49.108673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:23:57.714 [2024-07-15 17:05:49.108686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.108983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.108990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.109987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.109995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.110008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.110018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.110031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.714 [2024-07-15 17:05:49.110038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.714 [2024-07-15 17:05:49.110051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.110842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.715 [2024-07-15 17:05:49.110849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.715 [2024-07-15 17:05:49.111263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.111276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.111299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.111632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.716 [2024-07-15 17:05:49.111651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.111673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.111695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.111715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.111736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.111960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.111982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.111995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.112002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.112015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.112022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.112036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.112043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.112056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.112063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.112076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.112084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.112097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.112104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.112117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.112125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.112141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.112148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.112161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.716 [2024-07-15 17:05:49.112168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.716 [2024-07-15 17:05:49.112180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.112979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.112987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.113000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.113007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.113020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.113027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.113040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.113047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.113064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.113075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.113090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.113097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.113111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.113118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.113131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.113138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.113151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.113158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.113171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.113178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.717 [2024-07-15 17:05:49.113190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.717 [2024-07-15 17:05:49.113202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.718 [2024-07-15 17:05:49.113931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.718 [2024-07-15 17:05:49.113945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.113958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.113975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.113984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.113998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.114302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.114310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.118701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.118714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.118728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.118735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.118748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.118756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.118769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.118776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.118790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.118797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.118810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.118818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.118830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.118837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.118849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.718 [2024-07-15 17:05:49.118856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:23:57.718 [2024-07-15 17:05:49.118868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.118875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.118889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.118897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.118909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.118917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.118932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.118940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.118953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.118960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.118973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.118980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.118993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.719 [2024-07-15 17:05:49.119911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.119987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.119995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.120008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.120015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.120028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.120036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.120051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.120059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:57.719 [2024-07-15 17:05:49.120071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.719 [2024-07-15 17:05:49.120081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.720 [2024-07-15 17:05:49.120798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:23:57.720 [2024-07-15 17:05:49.120810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.120818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.120830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.120837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.120850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.120862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.120875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.120882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.120896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.120904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.120917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.120924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.120937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.120944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.120957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.120965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.120977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.120985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.120998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.121005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.121019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.121026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.121039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.121046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.121059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.121066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.121079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.121086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.121099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.121109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.121122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.121129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:23:57.721 [2024-07-15 17:05:49.121142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.721 [2024-07-15 17:05:49.121149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.121162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.121169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.121183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.121192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.121207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.121216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.121236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.121246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.121260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.121268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.121282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.121290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.121304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.121313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.121329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.121339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.721 [2024-07-15 17:05:49.122355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.721 [2024-07-15 17:05:49.122362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.122887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.122894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.123268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.123289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.722 [2024-07-15 17:05:49.123310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.722 [2024-07-15 17:05:49.123330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.722 [2024-07-15 17:05:49.123350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.722 [2024-07-15 17:05:49.123371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.722 [2024-07-15 17:05:49.123392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.722 [2024-07-15 17:05:49.123412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.722 [2024-07-15 17:05:49.123432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.722 [2024-07-15 17:05:49.123453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.722 [2024-07-15 17:05:49.123477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.722 [2024-07-15 17:05:49.123498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.722 [2024-07-15 17:05:49.123518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.722 [2024-07-15 17:05:49.123531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.722 [2024-07-15 17:05:49.123538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.723 [2024-07-15 17:05:49.123551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.723 [2024-07-15 17:05:49.123558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.723 [2024-07-15 17:05:49.123571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.723 [2024-07-15 17:05:49.123579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.723 [2024-07-15 17:05:49.123591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.723 [2024-07-15 17:05:49.123600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.723 [2024-07-15 17:05:49.123613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.723 [2024-07-15 17:05:49.123620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.723 [2024-07-15 17:05:49.123634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.723 [2024-07-15 17:05:49.123641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.723 [2024-07-15 17:05:49.123654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.723 [2024-07-15 17:05:49.123661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.723 [2024-07-15 17:05:49.123673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.723 [2024-07-15 17:05:49.123680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.723 [2024-07-15 17:05:49.123693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.723 [2024-07-15 17:05:49.123700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.723 [2024-07-15 17:05:49.123716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.723 [2024-07-15 17:05:49.123723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
[log condensed: the same command/completion pair repeats for WRITE sqid:1, lba 34920 through 35784 (len:8 each, SGL DATA BLOCK OFFSET 0x0 len:0x1000), every completion returning ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 with sqhd advancing 0073 through 005f; the span then switches to READ commands (sqid:1, lba 34776 onward, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) failing with the same status, sqhd 0060 onward]
00:23:57.726 [2024-07-15 17:05:49.127325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.726 [2024-07-15 17:05:49.127350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.726 [2024-07-15 17:05:49.127375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.726 [2024-07-15 17:05:49.127398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.726 [2024-07-15 17:05:49.127420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.726 [2024-07-15 17:05:49.127441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.726 [2024-07-15 17:05:49.127461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.726 [2024-07-15 17:05:49.127484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.726 [2024-07-15 17:05:49.127504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.726 [2024-07-15 17:05:49.127547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.127986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.127994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.128006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.128013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.128027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.128035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.128047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.128055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.128068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.128075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.128089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.128096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.128111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.128119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.726 [2024-07-15 17:05:49.128132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.726 [2024-07-15 17:05:49.128139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.128987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.128994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.129007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.129014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.129027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.129034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.129046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.129054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.129066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.129074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.129086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.129094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.129108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.129115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.727 [2024-07-15 17:05:49.129128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.727 [2024-07-15 17:05:49.129137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.728 [2024-07-15 17:05:49.129150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.728 [2024-07-15 17:05:49.129157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.728 [2024-07-15 17:05:49.129170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.728 [2024-07-15 17:05:49.129177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: WRITE and READ commands on sqid:1 (lba 34776-35792, len:8) all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:23:57.731 [2024-07-15 17:05:49.132648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.731 [2024-07-15 17:05:49.132655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.132985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.132998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.731 [2024-07-15 17:05:49.133858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.731 [2024-07-15 17:05:49.133865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.133877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.133885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.133897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.133904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.133918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.133925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.133938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.133945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.133957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.133964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.133976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.133984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.133997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.134004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.134017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.134025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.134038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.134045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.134058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.134065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.137625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.137635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.137649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.137657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.137670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.137677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.137690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.137698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.137713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.137720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.137733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.137741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.137754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.137761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.137773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.137780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.137793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.137801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.137814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.137821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.137834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.137841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.138223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.138251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.732 [2024-07-15 17:05:49.138272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.732 [2024-07-15 17:05:49.138571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.732 [2024-07-15 17:05:49.138580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.733 [2024-07-15 17:05:49.138620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.138982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.138995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.733 [2024-07-15 17:05:49.139265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.733 [2024-07-15 17:05:49.139272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.139958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.139966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.140175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.140186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.140214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.140223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.140246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.140254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.140272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.140280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.140299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.140307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.140327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.140335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.734 [2024-07-15 17:05:49.140353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.734 [2024-07-15 17:05:49.140361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.735 [2024-07-15 17:05:49.140831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.735 [2024-07-15 17:05:49.140849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.737 [2024-07-15 17:06:01.917507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:23:57.737 [2024-07-15 17:06:01.917519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.737 [2024-07-15 17:06:01.917529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:23:57.737 [2024-07-15 17:06:01.917541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.737 [2024-07-15 17:06:01.917548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:23:57.737 [2024-07-15 17:06:01.917561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.737 [2024-07-15 17:06:01.917567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:23:57.737 Received shutdown signal, test time was about 27.031084 seconds
00:23:57.737
00:23:57.737 Latency(us)
00:23:57.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:57.737 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:57.737 Verification LBA range: start 0x0 length 0x4000
00:23:57.737 Nvme0n1 : 27.03 10302.25 40.24 0.00 0.00 12403.74 163.84 3078254.41
00:23:57.737 ===================================================================================================================
00:23:57.737 Total : 10302.25 40.24 0.00 0.00 12403.74 163.84 3078254.41
00:23:57.737 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 188457 ']'
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 188457
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 188457 ']'
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 188457
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 188457
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 188457'
00:23:57.995 killing process with pid 188457
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 188457
00:23:57.995 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 188457
00:23:58.253 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:58.253 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:58.253 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:58.253 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:58.253 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:58.253 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:58.253 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:58.253 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:00.155 17:06:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:00.155
00:24:00.155 real 0m39.292s
00:24:00.155 user 1m46.142s
00:24:00.155 sys 0m10.470s
00:24:00.155 17:06:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:00.155 17:06:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:00.155 ************************************
00:24:00.155 END TEST nvmf_host_multipath_status
00:24:00.155 ************************************
00:24:00.414 17:06:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:00.414 17:06:06 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:00.414 17:06:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:00.414 17:06:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:00.414 17:06:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:00.414 ************************************
00:24:00.414 START TEST nvmf_discovery_remove_ifc
00:24:00.414 ************************************
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:00.414 * Looking for test storage...
00:24:00.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:00.414 17:06:06
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.414 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.415 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.415 17:06:07 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:00.415 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:00.415 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:00.415 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:05.688 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:05.688 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:05.688 Found net devices under 0000:86:00.0: cvl_0_0 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:05.688 Found net devices under 0000:86:00.1: cvl_0_1 
00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.688 
17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.688 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:05.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:24:05.947 00:24:05.947 --- 10.0.0.2 ping statistics --- 00:24:05.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.947 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:24:05.947 00:24:05.947 --- 10.0.0.1 ping statistics --- 00:24:05.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.947 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=197634 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 197634 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 197634 ']' 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.947 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.947 [2024-07-15 17:06:12.478919] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:24:05.947 [2024-07-15 17:06:12.478984] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.947 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.947 [2024-07-15 17:06:12.537920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.947 [2024-07-15 17:06:12.615296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.947 [2024-07-15 17:06:12.615334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:05.947 [2024-07-15 17:06:12.615341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.947 [2024-07-15 17:06:12.615347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.947 [2024-07-15 17:06:12.615352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.947 [2024-07-15 17:06:12.615387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:06.883 [2024-07-15 17:06:13.322867] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.883 [2024-07-15 17:06:13.331024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:06.883 null0 00:24:06.883 [2024-07-15 17:06:13.363002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=197790 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 197790 /tmp/host.sock 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 197790 ']' 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:06.883 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:06.883 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:06.883 [2024-07-15 17:06:13.430058] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:24:06.883 [2024-07-15 17:06:13.430102] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197790 ] 00:24:06.883 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.883 [2024-07-15 17:06:13.481686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.140 [2024-07-15 17:06:13.566896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.707 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.642 [2024-07-15 17:06:15.310603] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:08.642 [2024-07-15 17:06:15.310625] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:08.642 [2024-07-15 17:06:15.310637] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:08.900 [2024-07-15 17:06:15.437032] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:08.900 [2024-07-15 17:06:15.493920] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:08.900 [2024-07-15 17:06:15.493967] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:08.900 [2024-07-15 17:06:15.493988] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:08.900 [2024-07-15 17:06:15.494001] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:08.900 [2024-07-15 17:06:15.494018] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:08.900 17:06:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.900 [2024-07-15 17:06:15.499848] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x227ce30 was disconnected and freed. delete nvme_qpair. 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:08.900 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:09.173 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:09.173 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:09.173 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.173 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:09.173 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:09.173 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.173 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:09.173 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.173 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.173 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:09.173 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:10.107 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:10.107 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.107 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:10.107 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:10.108 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.108 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:10.108 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.108 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.108 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:10.108 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:11.484 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.484 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.484 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.484 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.484 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.484 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.484 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.484 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.484 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:11.484 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:12.420 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.420 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.420 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.420 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.420 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.420 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.420 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.420 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.420 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:12.420 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:13.355 17:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.355 17:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.355 17:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.355 17:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.355 17:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.355 17:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.355 17:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.355 17:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.355 17:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:13.355 17:06:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.359 17:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:14.359 17:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:14.359 17:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.359 17:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.359 17:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:14.359 17:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.359 17:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:14.359 17:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.359 17:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:14.359 17:06:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.359 [2024-07-15 17:06:20.935235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:14.359 [2024-07-15 17:06:20.935271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.359 [2024-07-15 17:06:20.935283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.359 [2024-07-15 17:06:20.935293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.359 [2024-07-15 17:06:20.935300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.359 [2024-07-15 17:06:20.935308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.359 [2024-07-15 17:06:20.935315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.359 [2024-07-15 17:06:20.935322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.359 [2024-07-15 17:06:20.935329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.359 [2024-07-15 17:06:20.935336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 
cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.359 [2024-07-15 17:06:20.935342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.359 [2024-07-15 17:06:20.935349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2243690 is same with the state(5) to be set 00:24:14.359 [2024-07-15 17:06:20.945251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2243690 (9): Bad file descriptor 00:24:14.359 [2024-07-15 17:06:20.955290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:15.295 17:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.295 17:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.295 17:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.295 17:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.295 17:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.295 17:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.295 17:06:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.553 [2024-07-15 17:06:21.986259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:15.553 [2024-07-15 17:06:21.986305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2243690 with addr=10.0.0.2, port=4420 00:24:15.553 [2024-07-15 17:06:21.986326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2243690 is same with the state(5) to be set 00:24:15.553 [2024-07-15 17:06:21.986358] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2243690 (9): Bad file descriptor 00:24:15.553 [2024-07-15 17:06:21.986772] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.553 [2024-07-15 17:06:21.986792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:15.553 [2024-07-15 17:06:21.986801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:15.553 [2024-07-15 17:06:21.986811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:15.553 [2024-07-15 17:06:21.986830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.553 [2024-07-15 17:06:21.986841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:15.553 17:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.553 17:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:15.553 17:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:16.487 [2024-07-15 17:06:22.989321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:16.487 [2024-07-15 17:06:22.989344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.487 [2024-07-15 17:06:22.989351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.487 [2024-07-15 17:06:22.989357] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:16.487 [2024-07-15 17:06:22.989369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.487 [2024-07-15 17:06:22.989386] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:16.487 [2024-07-15 17:06:22.989405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.487 [2024-07-15 17:06:22.989414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.487 [2024-07-15 17:06:22.989424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.487 [2024-07-15 17:06:22.989431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.487 [2024-07-15 17:06:22.989438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.487 [2024-07-15 17:06:22.989444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.487 [2024-07-15 17:06:22.989451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.487 
[2024-07-15 17:06:22.989457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.487 [2024-07-15 17:06:22.989464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.487 [2024-07-15 17:06:22.989470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.487 [2024-07-15 17:06:22.989477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:24:16.487 [2024-07-15 17:06:22.989556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2242a80 (9): Bad file descriptor 00:24:16.487 [2024-07-15 17:06:22.990566] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:16.487 [2024-07-15 17:06:22.990577] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:16.487 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.744 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:16.744 17:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.680 17:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.680 17:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.680 17:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.680 17:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.680 17:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.680 17:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.680 17:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.680 17:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.680 17:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:17.680 17:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:18.616 [2024-07-15 17:06:25.046737] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:18.616 [2024-07-15 17:06:25.046757] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:18.616 [2024-07-15 17:06:25.046769] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:18.616 [2024-07-15 17:06:25.173158] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:18.616 17:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:18.616 17:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:18.616 17:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.616 17:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:18.616 17:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.616 
17:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.616 17:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:18.616 17:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.616 [2024-07-15 17:06:25.269438] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:18.616 [2024-07-15 17:06:25.269473] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:18.616 [2024-07-15 17:06:25.269492] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:18.616 [2024-07-15 17:06:25.269506] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:18.616 [2024-07-15 17:06:25.269512] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:18.616 [2024-07-15 17:06:25.275243] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22598d0 was disconnected and freed. delete nvme_qpair. 
00:24:18.875 17:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:18.875 17:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:19.811 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.811 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.811 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.811 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.811 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 197790 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 197790 ']' 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 197790 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 197790 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 197790' 00:24:19.812 killing process with pid 197790 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 197790 00:24:19.812 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 197790 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:20.071 rmmod nvme_tcp 00:24:20.071 rmmod nvme_fabrics 00:24:20.071 rmmod nvme_keyring 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 197634 ']' 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 197634 
00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 197634 ']' 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 197634 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 197634 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 197634' 00:24:20.071 killing process with pid 197634 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 197634 00:24:20.071 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 197634 00:24:20.331 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:20.331 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:20.331 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:20.331 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:20.331 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:20.331 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.331 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:24:20.331 17:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.870 17:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:22.870 00:24:22.870 real 0m22.046s 00:24:22.870 user 0m28.591s 00:24:22.870 sys 0m5.368s 00:24:22.870 17:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:22.870 17:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.870 ************************************ 00:24:22.870 END TEST nvmf_discovery_remove_ifc 00:24:22.870 ************************************ 00:24:22.870 17:06:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:22.870 17:06:28 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:22.870 17:06:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:22.870 17:06:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:22.870 17:06:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:22.870 ************************************ 00:24:22.870 START TEST nvmf_identify_kernel_target 00:24:22.870 ************************************ 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:22.870 * Looking for test storage... 
00:24:22.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.870 17:06:29 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.870 17:06:29 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:22.870 17:06:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.145 17:06:33 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:28.145 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.145 17:06:33 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:28.145 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:28.145 Found net devices under 0000:86:00.0: cvl_0_0 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:28.145 Found net devices under 0000:86:00.1: cvl_0_1 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:28.145 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.146 17:06:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:28.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:24:28.146 00:24:28.146 --- 10.0.0.2 ping statistics --- 00:24:28.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.146 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:24:28.146 00:24:28.146 --- 10.0.0.1 ping statistics --- 00:24:28.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.146 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.146 17:06:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:28.146 17:06:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:30.683 Waiting for block devices as requested 00:24:30.683 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:30.683 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:30.683 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:30.683 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:30.683 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:30.683 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:30.942 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:30.942 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:30.942 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:30.942 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:31.202 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:31.202 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:31.202 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:31.461 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:31.461 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:31.461 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:31.461 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:31.719 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:31.719 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:31.719 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:31.719 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:31.719 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:31.719 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:31.719 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:31.719 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:31.720 No valid GPT data, bailing 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:31.720 00:24:31.720 Discovery Log Number of Records 2, Generation counter 2 00:24:31.720 =====Discovery Log Entry 0====== 00:24:31.720 trtype: tcp 00:24:31.720 adrfam: ipv4 00:24:31.720 subtype: current discovery subsystem 00:24:31.720 treq: not specified, sq flow control disable supported 00:24:31.720 portid: 1 00:24:31.720 trsvcid: 4420 00:24:31.720 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:31.720 traddr: 10.0.0.1 00:24:31.720 eflags: none 00:24:31.720 sectype: none 00:24:31.720 =====Discovery Log Entry 1====== 00:24:31.720 trtype: tcp 00:24:31.720 adrfam: ipv4 00:24:31.720 subtype: nvme subsystem 00:24:31.720 treq: not specified, sq flow control disable supported 00:24:31.720 portid: 1 00:24:31.720 trsvcid: 4420 00:24:31.720 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:31.720 traddr: 10.0.0.1 00:24:31.720 eflags: none 00:24:31.720 sectype: none 00:24:31.720 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:31.720 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:31.720 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.980 ===================================================== 00:24:31.980 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:31.980 ===================================================== 00:24:31.980 Controller Capabilities/Features 00:24:31.980 ================================ 00:24:31.980 Vendor ID: 0000 00:24:31.980 Subsystem Vendor ID: 0000 00:24:31.980 Serial Number: 96ad4a51a84ed2df9cab 00:24:31.980 Model Number: Linux 00:24:31.980 Firmware Version: 6.7.0-68 00:24:31.980 Recommended Arb Burst: 0 00:24:31.980 IEEE OUI Identifier: 00 00 00 00:24:31.980 Multi-path I/O 00:24:31.980 May have multiple subsystem ports: No 00:24:31.980 May have multiple controllers: No 00:24:31.980 Associated with SR-IOV VF: No 00:24:31.980 Max Data Transfer Size: Unlimited 00:24:31.980 Max Number of Namespaces: 0 00:24:31.980 Max Number of I/O Queues: 1024 00:24:31.980 NVMe Specification Version (VS): 1.3 00:24:31.980 NVMe Specification Version (Identify): 1.3 00:24:31.980 Maximum Queue Entries: 1024 00:24:31.980 Contiguous Queues Required: No 00:24:31.980 Arbitration Mechanisms Supported 00:24:31.980 Weighted Round Robin: Not Supported 00:24:31.980 Vendor Specific: Not Supported 00:24:31.980 Reset Timeout: 7500 ms 00:24:31.980 Doorbell Stride: 4 bytes 00:24:31.980 NVM Subsystem Reset: Not Supported 00:24:31.980 Command Sets Supported 00:24:31.980 NVM Command Set: Supported 00:24:31.980 Boot Partition: Not Supported 00:24:31.980 Memory Page Size Minimum: 4096 bytes 00:24:31.980 Memory Page Size Maximum: 4096 bytes 00:24:31.980 Persistent Memory Region: Not Supported 00:24:31.980 Optional Asynchronous Events Supported 00:24:31.980 Namespace Attribute Notices: Not Supported 00:24:31.980 Firmware Activation Notices: Not Supported 00:24:31.980 ANA Change Notices: Not Supported 00:24:31.980 PLE Aggregate Log Change Notices: Not Supported 
00:24:31.980 LBA Status Info Alert Notices: Not Supported 00:24:31.980 EGE Aggregate Log Change Notices: Not Supported 00:24:31.980 Normal NVM Subsystem Shutdown event: Not Supported 00:24:31.980 Zone Descriptor Change Notices: Not Supported 00:24:31.980 Discovery Log Change Notices: Supported 00:24:31.980 Controller Attributes 00:24:31.980 128-bit Host Identifier: Not Supported 00:24:31.980 Non-Operational Permissive Mode: Not Supported 00:24:31.980 NVM Sets: Not Supported 00:24:31.980 Read Recovery Levels: Not Supported 00:24:31.980 Endurance Groups: Not Supported 00:24:31.980 Predictable Latency Mode: Not Supported 00:24:31.980 Traffic Based Keep ALive: Not Supported 00:24:31.980 Namespace Granularity: Not Supported 00:24:31.980 SQ Associations: Not Supported 00:24:31.980 UUID List: Not Supported 00:24:31.980 Multi-Domain Subsystem: Not Supported 00:24:31.980 Fixed Capacity Management: Not Supported 00:24:31.980 Variable Capacity Management: Not Supported 00:24:31.980 Delete Endurance Group: Not Supported 00:24:31.980 Delete NVM Set: Not Supported 00:24:31.980 Extended LBA Formats Supported: Not Supported 00:24:31.980 Flexible Data Placement Supported: Not Supported 00:24:31.980 00:24:31.980 Controller Memory Buffer Support 00:24:31.980 ================================ 00:24:31.980 Supported: No 00:24:31.980 00:24:31.980 Persistent Memory Region Support 00:24:31.980 ================================ 00:24:31.980 Supported: No 00:24:31.980 00:24:31.980 Admin Command Set Attributes 00:24:31.980 ============================ 00:24:31.980 Security Send/Receive: Not Supported 00:24:31.980 Format NVM: Not Supported 00:24:31.980 Firmware Activate/Download: Not Supported 00:24:31.980 Namespace Management: Not Supported 00:24:31.980 Device Self-Test: Not Supported 00:24:31.980 Directives: Not Supported 00:24:31.980 NVMe-MI: Not Supported 00:24:31.980 Virtualization Management: Not Supported 00:24:31.980 Doorbell Buffer Config: Not Supported 00:24:31.980 Get LBA Status 
Capability: Not Supported 00:24:31.980 Command & Feature Lockdown Capability: Not Supported 00:24:31.980 Abort Command Limit: 1 00:24:31.980 Async Event Request Limit: 1 00:24:31.980 Number of Firmware Slots: N/A 00:24:31.980 Firmware Slot 1 Read-Only: N/A 00:24:31.980 Firmware Activation Without Reset: N/A 00:24:31.980 Multiple Update Detection Support: N/A 00:24:31.980 Firmware Update Granularity: No Information Provided 00:24:31.980 Per-Namespace SMART Log: No 00:24:31.980 Asymmetric Namespace Access Log Page: Not Supported 00:24:31.980 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:31.980 Command Effects Log Page: Not Supported 00:24:31.980 Get Log Page Extended Data: Supported 00:24:31.980 Telemetry Log Pages: Not Supported 00:24:31.980 Persistent Event Log Pages: Not Supported 00:24:31.980 Supported Log Pages Log Page: May Support 00:24:31.980 Commands Supported & Effects Log Page: Not Supported 00:24:31.980 Feature Identifiers & Effects Log Page:May Support 00:24:31.980 NVMe-MI Commands & Effects Log Page: May Support 00:24:31.980 Data Area 4 for Telemetry Log: Not Supported 00:24:31.980 Error Log Page Entries Supported: 1 00:24:31.980 Keep Alive: Not Supported 00:24:31.980 00:24:31.980 NVM Command Set Attributes 00:24:31.980 ========================== 00:24:31.980 Submission Queue Entry Size 00:24:31.980 Max: 1 00:24:31.980 Min: 1 00:24:31.980 Completion Queue Entry Size 00:24:31.980 Max: 1 00:24:31.980 Min: 1 00:24:31.980 Number of Namespaces: 0 00:24:31.980 Compare Command: Not Supported 00:24:31.980 Write Uncorrectable Command: Not Supported 00:24:31.980 Dataset Management Command: Not Supported 00:24:31.980 Write Zeroes Command: Not Supported 00:24:31.980 Set Features Save Field: Not Supported 00:24:31.980 Reservations: Not Supported 00:24:31.980 Timestamp: Not Supported 00:24:31.980 Copy: Not Supported 00:24:31.980 Volatile Write Cache: Not Present 00:24:31.980 Atomic Write Unit (Normal): 1 00:24:31.980 Atomic Write Unit (PFail): 1 
00:24:31.981 Atomic Compare & Write Unit: 1 00:24:31.981 Fused Compare & Write: Not Supported 00:24:31.981 Scatter-Gather List 00:24:31.981 SGL Command Set: Supported 00:24:31.981 SGL Keyed: Not Supported 00:24:31.981 SGL Bit Bucket Descriptor: Not Supported 00:24:31.981 SGL Metadata Pointer: Not Supported 00:24:31.981 Oversized SGL: Not Supported 00:24:31.981 SGL Metadata Address: Not Supported 00:24:31.981 SGL Offset: Supported 00:24:31.981 Transport SGL Data Block: Not Supported 00:24:31.981 Replay Protected Memory Block: Not Supported 00:24:31.981 00:24:31.981 Firmware Slot Information 00:24:31.981 ========================= 00:24:31.981 Active slot: 0 00:24:31.981 00:24:31.981 00:24:31.981 Error Log 00:24:31.981 ========= 00:24:31.981 00:24:31.981 Active Namespaces 00:24:31.981 ================= 00:24:31.981 Discovery Log Page 00:24:31.981 ================== 00:24:31.981 Generation Counter: 2 00:24:31.981 Number of Records: 2 00:24:31.981 Record Format: 0 00:24:31.981 00:24:31.981 Discovery Log Entry 0 00:24:31.981 ---------------------- 00:24:31.981 Transport Type: 3 (TCP) 00:24:31.981 Address Family: 1 (IPv4) 00:24:31.981 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:31.981 Entry Flags: 00:24:31.981 Duplicate Returned Information: 0 00:24:31.981 Explicit Persistent Connection Support for Discovery: 0 00:24:31.981 Transport Requirements: 00:24:31.981 Secure Channel: Not Specified 00:24:31.981 Port ID: 1 (0x0001) 00:24:31.981 Controller ID: 65535 (0xffff) 00:24:31.981 Admin Max SQ Size: 32 00:24:31.981 Transport Service Identifier: 4420 00:24:31.981 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:31.981 Transport Address: 10.0.0.1 00:24:31.981 Discovery Log Entry 1 00:24:31.981 ---------------------- 00:24:31.981 Transport Type: 3 (TCP) 00:24:31.981 Address Family: 1 (IPv4) 00:24:31.981 Subsystem Type: 2 (NVM Subsystem) 00:24:31.981 Entry Flags: 00:24:31.981 Duplicate Returned Information: 0 00:24:31.981 Explicit Persistent 
Connection Support for Discovery: 0 00:24:31.981 Transport Requirements: 00:24:31.981 Secure Channel: Not Specified 00:24:31.981 Port ID: 1 (0x0001) 00:24:31.981 Controller ID: 65535 (0xffff) 00:24:31.981 Admin Max SQ Size: 32 00:24:31.981 Transport Service Identifier: 4420 00:24:31.981 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:31.981 Transport Address: 10.0.0.1 00:24:31.981 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:31.981 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.981 get_feature(0x01) failed 00:24:31.981 get_feature(0x02) failed 00:24:31.981 get_feature(0x04) failed 00:24:31.981 ===================================================== 00:24:31.981 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:31.981 ===================================================== 00:24:31.981 Controller Capabilities/Features 00:24:31.981 ================================ 00:24:31.981 Vendor ID: 0000 00:24:31.981 Subsystem Vendor ID: 0000 00:24:31.981 Serial Number: f53d9bb1065a77219553 00:24:31.981 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:31.981 Firmware Version: 6.7.0-68 00:24:31.981 Recommended Arb Burst: 6 00:24:31.981 IEEE OUI Identifier: 00 00 00 00:24:31.981 Multi-path I/O 00:24:31.981 May have multiple subsystem ports: Yes 00:24:31.981 May have multiple controllers: Yes 00:24:31.981 Associated with SR-IOV VF: No 00:24:31.981 Max Data Transfer Size: Unlimited 00:24:31.981 Max Number of Namespaces: 1024 00:24:31.981 Max Number of I/O Queues: 128 00:24:31.981 NVMe Specification Version (VS): 1.3 00:24:31.981 NVMe Specification Version (Identify): 1.3 00:24:31.981 Maximum Queue Entries: 1024 00:24:31.981 Contiguous Queues Required: No 00:24:31.981 Arbitration Mechanisms Supported 
00:24:31.981 Weighted Round Robin: Not Supported 00:24:31.981 Vendor Specific: Not Supported 00:24:31.981 Reset Timeout: 7500 ms 00:24:31.981 Doorbell Stride: 4 bytes 00:24:31.981 NVM Subsystem Reset: Not Supported 00:24:31.981 Command Sets Supported 00:24:31.981 NVM Command Set: Supported 00:24:31.981 Boot Partition: Not Supported 00:24:31.981 Memory Page Size Minimum: 4096 bytes 00:24:31.981 Memory Page Size Maximum: 4096 bytes 00:24:31.981 Persistent Memory Region: Not Supported 00:24:31.981 Optional Asynchronous Events Supported 00:24:31.981 Namespace Attribute Notices: Supported 00:24:31.981 Firmware Activation Notices: Not Supported 00:24:31.981 ANA Change Notices: Supported 00:24:31.981 PLE Aggregate Log Change Notices: Not Supported 00:24:31.981 LBA Status Info Alert Notices: Not Supported 00:24:31.981 EGE Aggregate Log Change Notices: Not Supported 00:24:31.981 Normal NVM Subsystem Shutdown event: Not Supported 00:24:31.981 Zone Descriptor Change Notices: Not Supported 00:24:31.981 Discovery Log Change Notices: Not Supported 00:24:31.981 Controller Attributes 00:24:31.981 128-bit Host Identifier: Supported 00:24:31.981 Non-Operational Permissive Mode: Not Supported 00:24:31.981 NVM Sets: Not Supported 00:24:31.981 Read Recovery Levels: Not Supported 00:24:31.981 Endurance Groups: Not Supported 00:24:31.981 Predictable Latency Mode: Not Supported 00:24:31.981 Traffic Based Keep ALive: Supported 00:24:31.981 Namespace Granularity: Not Supported 00:24:31.981 SQ Associations: Not Supported 00:24:31.981 UUID List: Not Supported 00:24:31.981 Multi-Domain Subsystem: Not Supported 00:24:31.981 Fixed Capacity Management: Not Supported 00:24:31.981 Variable Capacity Management: Not Supported 00:24:31.981 Delete Endurance Group: Not Supported 00:24:31.981 Delete NVM Set: Not Supported 00:24:31.981 Extended LBA Formats Supported: Not Supported 00:24:31.981 Flexible Data Placement Supported: Not Supported 00:24:31.981 00:24:31.981 Controller Memory Buffer Support 
00:24:31.981 ================================ 00:24:31.981 Supported: No 00:24:31.981 00:24:31.981 Persistent Memory Region Support 00:24:31.981 ================================ 00:24:31.981 Supported: No 00:24:31.981 00:24:31.981 Admin Command Set Attributes 00:24:31.981 ============================ 00:24:31.981 Security Send/Receive: Not Supported 00:24:31.981 Format NVM: Not Supported 00:24:31.981 Firmware Activate/Download: Not Supported 00:24:31.981 Namespace Management: Not Supported 00:24:31.981 Device Self-Test: Not Supported 00:24:31.981 Directives: Not Supported 00:24:31.981 NVMe-MI: Not Supported 00:24:31.981 Virtualization Management: Not Supported 00:24:31.981 Doorbell Buffer Config: Not Supported 00:24:31.981 Get LBA Status Capability: Not Supported 00:24:31.981 Command & Feature Lockdown Capability: Not Supported 00:24:31.981 Abort Command Limit: 4 00:24:31.981 Async Event Request Limit: 4 00:24:31.981 Number of Firmware Slots: N/A 00:24:31.981 Firmware Slot 1 Read-Only: N/A 00:24:31.981 Firmware Activation Without Reset: N/A 00:24:31.981 Multiple Update Detection Support: N/A 00:24:31.981 Firmware Update Granularity: No Information Provided 00:24:31.981 Per-Namespace SMART Log: Yes 00:24:31.981 Asymmetric Namespace Access Log Page: Supported 00:24:31.981 ANA Transition Time : 10 sec 00:24:31.981 00:24:31.981 Asymmetric Namespace Access Capabilities 00:24:31.981 ANA Optimized State : Supported 00:24:31.981 ANA Non-Optimized State : Supported 00:24:31.981 ANA Inaccessible State : Supported 00:24:31.981 ANA Persistent Loss State : Supported 00:24:31.981 ANA Change State : Supported 00:24:31.981 ANAGRPID is not changed : No 00:24:31.981 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:31.981 00:24:31.981 ANA Group Identifier Maximum : 128 00:24:31.981 Number of ANA Group Identifiers : 128 00:24:31.981 Max Number of Allowed Namespaces : 1024 00:24:31.981 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:31.981 Command Effects Log Page: Supported 
00:24:31.981 Get Log Page Extended Data: Supported 00:24:31.981 Telemetry Log Pages: Not Supported 00:24:31.981 Persistent Event Log Pages: Not Supported 00:24:31.981 Supported Log Pages Log Page: May Support 00:24:31.981 Commands Supported & Effects Log Page: Not Supported 00:24:31.981 Feature Identifiers & Effects Log Page:May Support 00:24:31.981 NVMe-MI Commands & Effects Log Page: May Support 00:24:31.981 Data Area 4 for Telemetry Log: Not Supported 00:24:31.981 Error Log Page Entries Supported: 128 00:24:31.981 Keep Alive: Supported 00:24:31.981 Keep Alive Granularity: 1000 ms 00:24:31.981 00:24:31.981 NVM Command Set Attributes 00:24:31.981 ========================== 00:24:31.981 Submission Queue Entry Size 00:24:31.981 Max: 64 00:24:31.981 Min: 64 00:24:31.981 Completion Queue Entry Size 00:24:31.981 Max: 16 00:24:31.981 Min: 16 00:24:31.981 Number of Namespaces: 1024 00:24:31.981 Compare Command: Not Supported 00:24:31.981 Write Uncorrectable Command: Not Supported 00:24:31.981 Dataset Management Command: Supported 00:24:31.981 Write Zeroes Command: Supported 00:24:31.981 Set Features Save Field: Not Supported 00:24:31.981 Reservations: Not Supported 00:24:31.981 Timestamp: Not Supported 00:24:31.981 Copy: Not Supported 00:24:31.981 Volatile Write Cache: Present 00:24:31.981 Atomic Write Unit (Normal): 1 00:24:31.981 Atomic Write Unit (PFail): 1 00:24:31.981 Atomic Compare & Write Unit: 1 00:24:31.981 Fused Compare & Write: Not Supported 00:24:31.981 Scatter-Gather List 00:24:31.982 SGL Command Set: Supported 00:24:31.982 SGL Keyed: Not Supported 00:24:31.982 SGL Bit Bucket Descriptor: Not Supported 00:24:31.982 SGL Metadata Pointer: Not Supported 00:24:31.982 Oversized SGL: Not Supported 00:24:31.982 SGL Metadata Address: Not Supported 00:24:31.982 SGL Offset: Supported 00:24:31.982 Transport SGL Data Block: Not Supported 00:24:31.982 Replay Protected Memory Block: Not Supported 00:24:31.982 00:24:31.982 Firmware Slot Information 00:24:31.982 
========================= 00:24:31.982 Active slot: 0 00:24:31.982 00:24:31.982 Asymmetric Namespace Access 00:24:31.982 =========================== 00:24:31.982 Change Count : 0 00:24:31.982 Number of ANA Group Descriptors : 1 00:24:31.982 ANA Group Descriptor : 0 00:24:31.982 ANA Group ID : 1 00:24:31.982 Number of NSID Values : 1 00:24:31.982 Change Count : 0 00:24:31.982 ANA State : 1 00:24:31.982 Namespace Identifier : 1 00:24:31.982 00:24:31.982 Commands Supported and Effects 00:24:31.982 ============================== 00:24:31.982 Admin Commands 00:24:31.982 -------------- 00:24:31.982 Get Log Page (02h): Supported 00:24:31.982 Identify (06h): Supported 00:24:31.982 Abort (08h): Supported 00:24:31.982 Set Features (09h): Supported 00:24:31.982 Get Features (0Ah): Supported 00:24:31.982 Asynchronous Event Request (0Ch): Supported 00:24:31.982 Keep Alive (18h): Supported 00:24:31.982 I/O Commands 00:24:31.982 ------------ 00:24:31.982 Flush (00h): Supported 00:24:31.982 Write (01h): Supported LBA-Change 00:24:31.982 Read (02h): Supported 00:24:31.982 Write Zeroes (08h): Supported LBA-Change 00:24:31.982 Dataset Management (09h): Supported 00:24:31.982 00:24:31.982 Error Log 00:24:31.982 ========= 00:24:31.982 Entry: 0 00:24:31.982 Error Count: 0x3 00:24:31.982 Submission Queue Id: 0x0 00:24:31.982 Command Id: 0x5 00:24:31.982 Phase Bit: 0 00:24:31.982 Status Code: 0x2 00:24:31.982 Status Code Type: 0x0 00:24:31.982 Do Not Retry: 1 00:24:31.982 Error Location: 0x28 00:24:31.982 LBA: 0x0 00:24:31.982 Namespace: 0x0 00:24:31.982 Vendor Log Page: 0x0 00:24:31.982 ----------- 00:24:31.982 Entry: 1 00:24:31.982 Error Count: 0x2 00:24:31.982 Submission Queue Id: 0x0 00:24:31.982 Command Id: 0x5 00:24:31.982 Phase Bit: 0 00:24:31.982 Status Code: 0x2 00:24:31.982 Status Code Type: 0x0 00:24:31.982 Do Not Retry: 1 00:24:31.982 Error Location: 0x28 00:24:31.982 LBA: 0x0 00:24:31.982 Namespace: 0x0 00:24:31.982 Vendor Log Page: 0x0 00:24:31.982 ----------- 00:24:31.982 
Entry: 2 00:24:31.982 Error Count: 0x1 00:24:31.982 Submission Queue Id: 0x0 00:24:31.982 Command Id: 0x4 00:24:31.982 Phase Bit: 0 00:24:31.982 Status Code: 0x2 00:24:31.982 Status Code Type: 0x0 00:24:31.982 Do Not Retry: 1 00:24:31.982 Error Location: 0x28 00:24:31.982 LBA: 0x0 00:24:31.982 Namespace: 0x0 00:24:31.982 Vendor Log Page: 0x0 00:24:31.982 00:24:31.982 Number of Queues 00:24:31.982 ================ 00:24:31.982 Number of I/O Submission Queues: 128 00:24:31.982 Number of I/O Completion Queues: 128 00:24:31.982 00:24:31.982 ZNS Specific Controller Data 00:24:31.982 ============================ 00:24:31.982 Zone Append Size Limit: 0 00:24:31.982 00:24:31.982 00:24:31.982 Active Namespaces 00:24:31.982 ================= 00:24:31.982 get_feature(0x05) failed 00:24:31.982 Namespace ID:1 00:24:31.982 Command Set Identifier: NVM (00h) 00:24:31.982 Deallocate: Supported 00:24:31.982 Deallocated/Unwritten Error: Not Supported 00:24:31.982 Deallocated Read Value: Unknown 00:24:31.982 Deallocate in Write Zeroes: Not Supported 00:24:31.982 Deallocated Guard Field: 0xFFFF 00:24:31.982 Flush: Supported 00:24:31.982 Reservation: Not Supported 00:24:31.982 Namespace Sharing Capabilities: Multiple Controllers 00:24:31.982 Size (in LBAs): 1953525168 (931GiB) 00:24:31.982 Capacity (in LBAs): 1953525168 (931GiB) 00:24:31.982 Utilization (in LBAs): 1953525168 (931GiB) 00:24:31.982 UUID: 659f3ad0-4420-4f56-95e0-c67cea93a3dd 00:24:31.982 Thin Provisioning: Not Supported 00:24:31.982 Per-NS Atomic Units: Yes 00:24:31.982 Atomic Boundary Size (Normal): 0 00:24:31.982 Atomic Boundary Size (PFail): 0 00:24:31.982 Atomic Boundary Offset: 0 00:24:31.982 NGUID/EUI64 Never Reused: No 00:24:31.982 ANA group ID: 1 00:24:31.982 Namespace Write Protected: No 00:24:31.982 Number of LBA Formats: 1 00:24:31.982 Current LBA Format: LBA Format #00 00:24:31.982 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:31.982 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.982 rmmod nvme_tcp 00:24:31.982 rmmod nvme_fabrics 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.982 17:06:38 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.537 17:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:34.537 17:06:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:34.537 17:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:34.537 17:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:34.537 17:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:34.537 17:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:34.537 17:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:34.537 17:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:34.537 17:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:34.537 17:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:34.537 17:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:36.468 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:36.468 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:36.468 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:36.468 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:36.468 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:36.468 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:36.468 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:36.468 0000:00:04.0 (8086 2021): ioatdma -> 
vfio-pci 00:24:36.468 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:36.727 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:36.727 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:36.727 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:36.727 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:36.727 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:36.727 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:36.727 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:37.662 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:37.662 00:24:37.662 real 0m15.096s 00:24:37.662 user 0m3.600s 00:24:37.662 sys 0m7.790s 00:24:37.662 17:06:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:37.662 17:06:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.662 ************************************ 00:24:37.662 END TEST nvmf_identify_kernel_target 00:24:37.662 ************************************ 00:24:37.662 17:06:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:37.662 17:06:44 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:37.662 17:06:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:37.662 17:06:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:37.662 17:06:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:37.662 ************************************ 00:24:37.662 START TEST nvmf_auth_host 00:24:37.662 ************************************ 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:37.662 * Looking for test storage... 
00:24:37.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.662 
17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.662 17:06:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:37.663 
17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:37.663 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:42.937 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.937 17:06:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:42.937 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:24:42.937 Found net devices under 0000:86:00.0: cvl_0_0 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:42.937 Found net devices under 0000:86:00.1: cvl_0_1 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.937 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:43.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
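Editor's note: the `nvmf_tcp_init` entries above build a two-port loopback test topology. The first NIC port (`cvl_0_0`) moves into a private network namespace and acts as the NVMe-oF target; the second port (`cvl_0_1`) stays in the root namespace as the initiator, and a ping in each direction verifies connectivity. A condensed sketch of the same steps follows — interface names and addresses are the values from this run, and it assumes root privileges and two cross-connected ports:

```shell
# Sketch of the nvmf_tcp_init topology traced above (assumes root and two
# cross-connected ports; cvl_0_0/cvl_0_1 and 10.0.0.0/24 are this run's values).
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic on the default port at the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                          # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> initiator
```

This is a privileged configuration sketch, not a runnable test; the real script also prefixes the target application with `ip netns exec "$NS"` so it binds inside the namespace.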
00:24:43.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:24:43.196 00:24:43.196 --- 10.0.0.2 ping statistics --- 00:24:43.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.196 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:24:43.196 00:24:43.196 --- 10.0.0.1 ping statistics --- 00:24:43.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.196 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:43.196 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.197 17:06:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=209645 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 209645 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 209645 ']' 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.197 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:44.133 17:06:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b4b0e54d760f35096405bb7c09709245 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2wc 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b4b0e54d760f35096405bb7c09709245 0 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b4b0e54d760f35096405bb7c09709245 0 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b4b0e54d760f35096405bb7c09709245 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2wc 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2wc 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.2wc 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f5099c058a515725aba76ebee186147c0bd32d8a449f8d9a71a4fdd2d8ffb334 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3GW 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f5099c058a515725aba76ebee186147c0bd32d8a449f8d9a71a4fdd2d8ffb334 3 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f5099c058a515725aba76ebee186147c0bd32d8a449f8d9a71a4fdd2d8ffb334 3 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f5099c058a515725aba76ebee186147c0bd32d8a449f8d9a71a4fdd2d8ffb334 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3GW 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3GW 00:24:44.133 17:06:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3GW 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:44.133 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=933c72696e047f00b79123285e3d456f6bbe5d541c8aa4ee 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.IwU 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 933c72696e047f00b79123285e3d456f6bbe5d541c8aa4ee 0 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 933c72696e047f00b79123285e3d456f6bbe5d541c8aa4ee 0 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=933c72696e047f00b79123285e3d456f6bbe5d541c8aa4ee 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.IwU 00:24:44.134 17:06:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.IwU 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.IwU 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:44.134 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8e82fa1ac2c9df5f4593a42814eb1a551fb452579f02fa50 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.70Z 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8e82fa1ac2c9df5f4593a42814eb1a551fb452579f02fa50 2 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8e82fa1ac2c9df5f4593a42814eb1a551fb452579f02fa50 2 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8e82fa1ac2c9df5f4593a42814eb1a551fb452579f02fa50 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:44.393 17:06:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.70Z 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.70Z 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.70Z 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0246351ed8bceda000292cc729bf005b 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LLc 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0246351ed8bceda000292cc729bf005b 1 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0246351ed8bceda000292cc729bf005b 1 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0246351ed8bceda000292cc729bf005b 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LLc 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LLc 00:24:44.393 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.LLc 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6806cec2cc125f1663b37d2720ac6cff 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MQp 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6806cec2cc125f1663b37d2720ac6cff 1 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6806cec2cc125f1663b37d2720ac6cff 1 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6806cec2cc125f1663b37d2720ac6cff 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:44.394 
17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MQp 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MQp 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.MQp 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fe359906675c61dc8818d6cd014834ec27db5360e5cc6e2a 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.e2f 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fe359906675c61dc8818d6cd014834ec27db5360e5cc6e2a 2 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fe359906675c61dc8818d6cd014834ec27db5360e5cc6e2a 2 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=fe359906675c61dc8818d6cd014834ec27db5360e5cc6e2a 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:44.394 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.e2f 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.e2f 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.e2f 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cfc2a02707442b8daf2ec0ef03c78bec 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.YNn 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cfc2a02707442b8daf2ec0ef03c78bec 0 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cfc2a02707442b8daf2ec0ef03c78bec 0 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:44.394 17:06:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cfc2a02707442b8daf2ec0ef03c78bec 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:44.394 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.YNn 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.YNn 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.YNn 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7953b5f61dd8c24caaf6e6a1ab347005e061e54a6537f414bb9675ce43a5013c 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DQ1 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7953b5f61dd8c24caaf6e6a1ab347005e061e54a6537f414bb9675ce43a5013c 3 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7953b5f61dd8c24caaf6e6a1ab347005e061e54a6537f414bb9675ce43a5013c 3 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7953b5f61dd8c24caaf6e6a1ab347005e061e54a6537f414bb9675ce43a5013c 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DQ1 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DQ1 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.DQ1 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 209645 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 209645 ']' 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
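Editor's note: the `gen_dhchap_key` / `format_key` entries above draw a random hex secret from `/dev/urandom` via `xxd` and hand it to an inline `python -` snippet whose body the trace does not show. The sketch below reconstructs that encoding as I understand the NVMe DH-HMAC-CHAP secret representation — base64 of the ASCII secret followed by its little-endian CRC32, with the digest index from the trace's table (0=null, 1=sha256, 2=sha384, 3=sha512). Treat the exact payload layout as an assumption, not a transcript of SPDK's code:

```python
import base64
import zlib

def format_dhchap_key(secret: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Encode a DH-HMAC-CHAP secret string (payload layout assumed, see note).

    The payload is the ASCII secret followed by the little-endian CRC32 of
    those same bytes, base64-encoded; `digest` selects the hash algorithm
    (0=null, 1=sha256, 2=sha384, 3=sha512, matching the trace's table).
    """
    key = secret.encode("ascii")
    payload = key + zlib.crc32(key).to_bytes(4, "little")
    return "{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(payload).decode("ascii"))

# One of the sha256/len-32 secrets generated in the trace:
print(format_dhchap_key("0246351ed8bceda000292cc729bf005b", 1))
```

The embedded CRC lets a consumer detect a corrupted or truncated secret before attempting authentication.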
00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.654 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2wc 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3GW ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3GW 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.IwU 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.70Z ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.70Z 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.LLc 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.MQp ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MQp 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.e2f 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 
17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.YNn ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.YNn 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.DQ1 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:44.914 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:24:44.915 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:44.915 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:44.915 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:44.915 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:47.451 Waiting for block devices as requested 00:24:47.451 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:47.710 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:47.710 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:47.710 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:47.710 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:47.969 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:47.969 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:47.969 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:48.228 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:48.229 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:48.229 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:48.229 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:48.488 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:48.488 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:48.488 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:48.488 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:48.746 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:49.312 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:49.313 No valid GPT data, bailing 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:49.313 00:24:49.313 Discovery Log Number of Records 2, Generation counter 2 00:24:49.313 =====Discovery Log Entry 0====== 00:24:49.313 trtype: tcp 00:24:49.313 adrfam: ipv4 00:24:49.313 subtype: current discovery subsystem 00:24:49.313 treq: not specified, sq flow control disable supported 00:24:49.313 portid: 1 00:24:49.313 trsvcid: 4420 00:24:49.313 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:49.313 traddr: 10.0.0.1 00:24:49.313 eflags: none 00:24:49.313 sectype: none 00:24:49.313 =====Discovery Log Entry 1====== 00:24:49.313 trtype: tcp 00:24:49.313 adrfam: ipv4 00:24:49.313 subtype: nvme subsystem 00:24:49.313 treq: not specified, sq flow control disable supported 00:24:49.313 portid: 1 00:24:49.313 trsvcid: 4420 00:24:49.313 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:49.313 traddr: 10.0.0.1 00:24:49.313 eflags: none 00:24:49.313 sectype: none 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.313 17:06:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.313 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.571 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.571 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.571 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.571 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:49.571 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.571 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.571 nvme0n1 00:24:49.571 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.571 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.571 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.571 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.572 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.830 nvme0n1 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.830 17:06:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.830 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.089 nvme0n1 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.089 17:06:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.089 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.347 nvme0n1 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.348 nvme0n1 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.348 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:50.606 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.607 nvme0n1 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.607 17:06:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.607 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.867 nvme0n1 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.867 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.126 17:06:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.126 nvme0n1 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.126 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.127 17:06:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.127 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.385 nvme0n1 00:24:51.385 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.385 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.385 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.385 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.386 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.386 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.386 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.386 17:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:24:51.386 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.386 17:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.386 17:06:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.386 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.644 nvme0n1 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.645 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.904 nvme0n1 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.904 17:06:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.904 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.163 nvme0n1 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:24:52.163 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.164 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.422 
17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.422 17:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.422 nvme0n1 00:24:52.422 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.691 17:06:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.691 17:06:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.691 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.950 nvme0n1 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.950 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.210 nvme0n1 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.210 17:06:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.210 17:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.470 nvme0n1 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:53.470 
17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.470 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.759 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.018 nvme0n1 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.018 17:07:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.018 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.585 nvme0n1 00:24:54.585 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.585 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.585 17:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.585 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.585 17:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.585 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.843 nvme0n1 
00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:54.843 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.844 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.102 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.103 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:55.103 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.103 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.362 nvme0n1 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.362 17:07:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.362 17:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.930 nvme0n1 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 
00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.930 17:07:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.930 17:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.498 nvme0n1 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.498 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.064 nvme0n1 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.064 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.322 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.322 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.322 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.322 17:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.322 17:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:57.322 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.322 17:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.886 nvme0n1 00:24:57.886 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.886 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.886 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.886 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.886 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.886 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:57.886 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.886 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.886 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.886 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:57.887 17:07:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.887 17:07:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.887 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.452 nvme0n1 00:24:58.452 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.452 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.452 17:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.452 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.452 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.452 17:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.452 17:07:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.452 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.019 nvme0n1 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.019 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.278 
17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.278 nvme0n1 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.278 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.279 
17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.279 17:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.536 nvme0n1 00:24:59.536 17:07:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:59.536 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.536 17:07:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.537 17:07:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.537 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.795 nvme0n1 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.795 
17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:59.795 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.795 17:07:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.053 nvme0n1 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.053 17:07:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.053 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.054 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.054 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.054 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.311 nvme0n1 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.311 17:07:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.311 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.568 nvme0n1 00:25:00.568 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.568 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.568 17:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.568 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.568 17:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:00.568 17:07:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.568 nvme0n1 00:25:00.568 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.826 nvme0n1 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.826 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.085 17:07:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.085 nvme0n1 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.085 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.085 17:07:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.344 nvme0n1 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.344 17:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.603 
17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.603 
17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.603 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.861 nvme0n1 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.861 17:07:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.861 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.119 nvme0n1 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF:
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie:
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF:
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]]
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie:
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.119 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.378 nvme0n1
00:25:02.378 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.378 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:02.378 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:02.378 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.378 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.378 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.378 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.378 17:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:02.378 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.378 17:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==:
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K:
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==:
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]]
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K:
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.378 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.636 nvme0n1
00:25:02.636 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.636 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:02.636 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:02.636 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.636 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.636 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.636 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.636 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:02.636 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.636 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.896 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.896 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:02.896 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:25:02.896 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:02.896 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:02.896 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:02.896 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:02.896 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=:
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=:
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.897 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.155 nvme0n1
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N:
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=:
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N:
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]]
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=:
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.155 17:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.413 nvme0n1
00:25:03.413 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.413 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:03.413 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:03.413 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.413 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.413 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.413 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:03.413 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:03.413 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.413 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==:
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==:
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==:
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]]
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==:
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.674 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.934 nvme0n1
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF:
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie:
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF:
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]]
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie:
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.934 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.500 nvme0n1
00:25:04.500 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:04.500 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:04.500 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:04.501 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:04.501 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.501 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:04.501 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:04.501 17:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:04.501 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:04.501 17:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==:
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K:
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==:
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]]
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K:
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:04.501 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.758 nvme0n1
00:25:04.758 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:04.758 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:04.758 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:04.758 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:04.758 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.758 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=:
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=:
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:05.017 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.277 nvme0n1
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N:
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=:
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N:
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]]
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=:
00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:25:05.277 17:07:11
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.277 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.278 17:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.846 nvme0n1 00:25:05.846 17:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.846 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.846 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.846 17:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.846 17:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.105 17:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.674 nvme0n1 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.674 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.242 nvme0n1 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.242 17:07:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.242 17:07:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.242 17:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.879 nvme0n1 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:07.879 
17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.879 17:07:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.879 17:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.446 nvme0n1 00:25:08.446 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.446 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.446 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.446 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.446 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.446 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.446 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.446 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.446 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.446 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:08.704 
17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:08.704 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.705 nvme0n1 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.705 17:07:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
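The `nvmet_auth_set_key` calls traced above (host/auth.sh@42-51) follow one fixed pattern: record the digest as `hmac(<digest>)`, record the DH group, record the host key, and record the controller key only when one exists. A minimal stand-alone sketch of that flow, under the assumption that the real helper writes these values into the kernel nvmet configfs host directory — here `AUTH_DIR` is a hypothetical temp-directory stand-in so the sketch runs anywhere, and `keys`/`ckeys` are the global DHHC-1 arrays the surrounding script maintains:

```shell
# Hedged sketch of nvmet_auth_set_key as traced at host/auth.sh@42-51.
# Assumption: the real helper targets the nvmet configfs host directory;
# AUTH_DIR is a stand-in so this can run without a configured target.
AUTH_DIR=$(mktemp -d)

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    # keys/ckeys are the caller's global DHHC-1 arrays; keyid 4 in the
    # trace has no controller key, so ckey may legitimately be empty.
    local key=${keys[$keyid]:-} ckey=${ckeys[$keyid]:-}
    echo "hmac($digest)" > "$AUTH_DIR/dhchap_hash"
    echo "$dhgroup"      > "$AUTH_DIR/dhchap_dhgroup"
    echo "$key"          > "$AUTH_DIR/dhchap_key"
    # Mirrors the @51 guard in the trace: only echo a ctrlr key if set.
    [[ -z $ckey ]] || echo "$ckey" > "$AUTH_DIR/dhchap_ctrl_key"
}
```

The optional controller key matches the trace: for keyids 0-3 the `[[ -z DHHC-1:... ]]` guard at @51 is false and the ckey is echoed, while for keyid 4 (`ckey=`) it is skipped.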
00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.705 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.964 nvme0n1 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.964 
17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.964 17:07:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.223 nvme0n1 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:09.223 
17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.223 17:07:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.223 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.482 nvme0n1 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.482 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.483 17:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.483 nvme0n1 00:25:09.483 17:07:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.483 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.483 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.483 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.483 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.483 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.742 17:07:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.742 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.742 nvme0n1 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.743 
17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.743 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
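The repetition in this trace comes from the nested loops at host/auth.sh@101-104: each DH group is paired with every configured key index, and each pair is first pushed to the target (`nvmet_auth_set_key`) and then exercised via `connect_authenticate`. A sketch of just that sweep structure — no `rpc_cmd` calls are issued, the key names are placeholders, and `dhgroups` lists only the groups visible in this chunk (the full script may iterate more):

```shell
# Loop structure driving the trace above (host/auth.sh@101-104 shape).
# Combinations are collected instead of being connected, so the sweep
# order can be inspected without a running SPDK target.
digest=sha512
dhgroups=(ffdhe2048 ffdhe3072)
keys=(key0 key1 key2 key3 key4)   # placeholders for the DHHC-1 secrets
combos=()
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        combos+=("$digest:$dhgroup:$keyid")
    done
done
```

With five keys and two groups this yields ten attach/detach cycles per digest, which is exactly the cadence of `bdev_nvme_attach_controller ... --dhchap-key keyN` / `bdev_nvme_detach_controller nvme0` pairs seen in the log.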
00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.103 nvme0n1 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.103 17:07:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.103 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.361 nvme0n1 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.362 17:07:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.362 17:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.619 nvme0n1 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.619 17:07:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:10.619 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.620 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.878 nvme0n1 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:25:10.878 17:07:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.878 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.137 nvme0n1 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:11.137 17:07:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.137 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.396 nvme0n1 00:25:11.396 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.396 17:07:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.396 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.396 17:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.396 17:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.396 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.656 nvme0n1 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.656 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.916 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.176 nvme0n1 00:25:12.176 17:07:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.176 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:12.177 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.177 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.436 nvme0n1 00:25:12.436 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.436 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.436 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.436 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.436 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.436 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.436 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.436 17:07:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.436 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.436 17:07:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.436 
17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.436 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.436 17:07:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.005 nvme0n1 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==: 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]] 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.005 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.265 nvme0n1 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.265 17:07:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF: 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]] 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.265 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.524 17:07:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.524 17:07:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.524 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.524 17:07:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.783 nvme0n1 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.783 17:07:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==: 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]] 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.783 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.784 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.351 nvme0n1 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
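The repeated `get_main_ns_ip` traces (nvmf/common.sh@741-755) pick the initiator address by mapping the transport type to the name of an environment variable and then dereferencing it. A hedged reconstruction of that selection logic, with illustrative addresses standing in for the test bed's real ones:

```shell
#!/usr/bin/env bash
# Sketch of the transport-to-IP selection seen in nvmf/common.sh@741-755.
# Addresses are placeholders for illustration only.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
    local transport=$1 ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Bail out if the transport is empty or has no candidate variable.
    [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
    ip=${!ip_candidates[$transport]}   # indirect expansion of the var name
    [[ -z $ip ]] && return 1
    echo "$ip"
}

get_main_ns_ip tcp    # prints: 10.0.0.1
get_main_ns_ip rdma   # prints: 10.0.0.2
```

The `echo 10.0.0.1` lines in the trace are this final step: for tcp the candidate resolves to `NVMF_INITIATOR_IP`, whose value is then handed to `bdev_nvme_attach_controller -a`.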
00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:14.351 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=: 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.352 17:07:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.352 17:07:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.611 nvme0n1 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiMGU1NGQ3NjBmMzUwOTY0MDViYjdjMDk3MDkyNDWPl9+N: 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: ]] 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjUwOTljMDU4YTUxNTcyNWFiYTc2ZWJlZTE4NjE0N2MwYmQzMmQ4YTQ0OWY4ZDlhNzFhNGZkZDJkOGZmYjMzNEPAO7Y=: 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:14.611 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:14.612 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:14.612 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:14.612 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:14.612 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.612 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:15.549 nvme0n1
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==:
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==:
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==:
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]]
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==:
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:15.549 17:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.117 nvme0n1
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF:
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie:
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDI0NjM1MWVkOGJjZWRhMDAwMjkyY2M3MjliZjAwNWK4uKmF:
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie: ]]
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjgwNmNlYzJjYzEyNWYxNjYzYjM3ZDI3MjBhYzZjZmbE1cie:
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:16.117 17:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.689 nvme0n1
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==:
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K:
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmUzNTk5MDY2NzVjNjFkYzg4MThkNmNkMDE0ODM0ZWMyN2RiNTM2MGU1Y2M2ZTJhR4z47w==:
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K: ]]
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ZjMmEwMjcwNzQ0MmI4ZGFmMmVjMGVmMDNjNzhiZWMLx43K:
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:16.689 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.257 nvme0n1
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=:
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzk1M2I1ZjYxZGQ4YzI0Y2FhZjZlNmExYWIzNDcwMDVlMDYxZTU0YTY1MzdmNDE0YmI5Njc1Y2U0M2E1MDEzY5I+rDE=:
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:17.257 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:17.258 17:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.826 nvme0n1
00:25:17.826 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:17.826 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:17.826 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:17.826 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:17.826 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.826 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:17.826 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:17.826 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:17.826 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:17.826 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==:
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==:
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMzYzcyNjk2ZTA0N2YwMGI3OTEyMzI4NWUzZDQ1NmY2YmJlNWQ1NDFjOGFhNGVl5tYWVA==:
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==: ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU4MmZhMWFjMmM5ZGY1ZjQ1OTNhNDI4MTRlYjFhNTUxZmI0NTI1NzlmMDJmYTUwJ8B7iA==:
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.086 request:
00:25:18.086 {
00:25:18.086 "name": "nvme0",
00:25:18.086 "trtype": "tcp",
00:25:18.086 "traddr": "10.0.0.1",
00:25:18.086 "adrfam": "ipv4",
00:25:18.086 "trsvcid": "4420",
00:25:18.086 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:25:18.086 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:25:18.086 "prchk_reftag": false,
00:25:18.086 "prchk_guard": false,
00:25:18.086 "hdgst": false,
00:25:18.086 "ddgst": false,
00:25:18.086 "method": "bdev_nvme_attach_controller",
00:25:18.086 "req_id": 1
00:25:18.086 }
00:25:18.086 Got JSON-RPC error response
00:25:18.086 response:
00:25:18.086 {
00:25:18.086 "code": -5,
00:25:18.086 "message": "Input/output error"
00:25:18.086 }
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.086 request:
00:25:18.086 {
00:25:18.086 "name": "nvme0",
00:25:18.086 "trtype": "tcp",
00:25:18.086 "traddr": "10.0.0.1",
00:25:18.086 "adrfam": "ipv4",
00:25:18.086 "trsvcid": "4420",
00:25:18.086 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:25:18.086 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:25:18.086 "prchk_reftag": false,
00:25:18.086 "prchk_guard": false,
00:25:18.086 "hdgst": false,
00:25:18.086 "ddgst": false,
00:25:18.086 "dhchap_key": "key2",
00:25:18.086 "method": "bdev_nvme_attach_controller",
00:25:18.086 "req_id": 1
00:25:18.086 }
00:25:18.086 Got JSON-RPC error response
00:25:18.086 response:
00:25:18.086 {
00:25:18.086 "code": -5,
00:25:18.086 "message": "Input/output error"
00:25:18.086 }
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:18.086 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:18.087 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.346 request:
00:25:18.346 {
00:25:18.346 "name": "nvme0",
00:25:18.346 "trtype": "tcp",
00:25:18.346 "traddr": "10.0.0.1",
00:25:18.346 "adrfam": "ipv4",
00:25:18.346 "trsvcid": "4420",
00:25:18.346 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:25:18.346 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:25:18.346 "prchk_reftag": false,
00:25:18.346 "prchk_guard": false,
00:25:18.346 "hdgst": false,
00:25:18.346 "ddgst": false,
00:25:18.346 "dhchap_key": "key1",
00:25:18.346 "dhchap_ctrlr_key": "ckey2",
00:25:18.346 "method": "bdev_nvme_attach_controller",
00:25:18.346 "req_id": 1
00:25:18.346 }
00:25:18.346 Got JSON-RPC error response
00:25:18.346 response:
00:25:18.346 {
00:25:18.346 "code": -5,
00:25:18.346 "message": "Input/output error"
00:25:18.346 }
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:18.346 rmmod nvme_tcp
00:25:18.346 rmmod nvme_fabrics
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 209645 ']'
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 209645
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 209645 ']'
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 209645
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 209645
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 209645'
00:25:18.346 killing process with pid 209645
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 209645
00:25:18.346 17:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 209645
00:25:18.606 17:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:18.606 17:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:18.606 17:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:18.606 17:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:18.606 17:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:18.606 17:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:18.606 17:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:18.606 17:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:20.542 17:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:20.542 17:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:25:20.543 17:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:20.543 17:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:25:20.543 17:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:25:20.543 17:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0
00:25:20.543 17:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:20.543 17:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:20.543 17:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:25:20.543 17:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:20.543 17:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:25:20.543 17:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:25:20.543 17:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:25:23.077 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:25:23.077 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:25:23.077 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:25:23.077 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:25:23.077 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:23.077 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:23.077 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:23.077 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:23.336 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:23.336 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:23.336 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:23.337 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:23.337 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:23.337 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:23.337 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:23.337 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:24.274 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:24.274 17:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.2wc /tmp/spdk.key-null.IwU /tmp/spdk.key-sha256.LLc /tmp/spdk.key-sha384.e2f /tmp/spdk.key-sha512.DQ1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:24.274 17:07:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:26.806 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:26.806 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:26.806 
0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:26.806 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:26.806 00:25:26.806 real 0m49.141s 00:25:26.806 user 0m43.972s 00:25:26.806 sys 0m11.726s 00:25:26.806 17:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:26.806 17:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.806 ************************************ 00:25:26.806 END TEST nvmf_auth_host 00:25:26.806 ************************************ 00:25:26.806 17:07:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:26.806 17:07:33 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:25:26.806 17:07:33 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:26.807 17:07:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:26.807 17:07:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:26.807 17:07:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:26.807 ************************************ 00:25:26.807 START TEST nvmf_digest 00:25:26.807 ************************************ 00:25:26.807 17:07:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:26.807 * Looking for test storage... 
00:25:26.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:26.807 17:07:33 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.807 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:26.807 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.807 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.807 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.807 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.807 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.807 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:27.064 17:07:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:32.333 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:32.334 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:32.334 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:32.334 Found net devices under 0000:86:00.0: cvl_0_0 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:32.334 Found net devices under 0000:86:00.1: cvl_0_1 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:32.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:25:32.334 00:25:32.334 --- 10.0.0.2 ping statistics --- 00:25:32.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.334 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:25:32.334 00:25:32.334 --- 10.0.0.1 ping statistics --- 00:25:32.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.334 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:32.334 ************************************ 00:25:32.334 START TEST nvmf_digest_clean 00:25:32.334 ************************************ 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:25:32.334 17:07:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=222782 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 222782 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 222782 ']' 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:32.334 17:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:32.334 [2024-07-15 17:07:38.753950] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:25:32.335 [2024-07-15 17:07:38.753994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.335 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.335 [2024-07-15 17:07:38.815392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.335 [2024-07-15 17:07:38.888481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.335 [2024-07-15 17:07:38.888522] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.335 [2024-07-15 17:07:38.888529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.335 [2024-07-15 17:07:38.888535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.335 [2024-07-15 17:07:38.888540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:32.335 [2024-07-15 17:07:38.888558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.903 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:32.903 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:32.903 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:32.903 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:32.903 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.162 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.162 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:33.162 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:33.162 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:33.162 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.162 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.162 null0 00:25:33.162 [2024-07-15 17:07:39.680624] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.162 [2024-07-15 17:07:39.704798] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.162 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.162 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:33.162 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:33.162 
17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:33.162 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=223028 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 223028 /var/tmp/bperf.sock 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 223028 ']' 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:33.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:33.163 17:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.163 [2024-07-15 17:07:39.754755] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:25:33.163 [2024-07-15 17:07:39.754797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223028 ] 00:25:33.163 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.163 [2024-07-15 17:07:39.816217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.421 [2024-07-15 17:07:39.921049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.989 17:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.989 17:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:33.989 17:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:33.989 17:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:33.989 17:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:34.249 17:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.249 17:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.817 nvme0n1 00:25:34.817 17:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:34.817 17:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:25:34.817 Running I/O for 2 seconds... 00:25:36.724 00:25:36.724 Latency(us) 00:25:36.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.724 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:36.724 nvme0n1 : 2.00 26917.95 105.15 0.00 0.00 4750.35 2507.46 17666.23 00:25:36.724 =================================================================================================================== 00:25:36.724 Total : 26917.95 105.15 0.00 0.00 4750.35 2507.46 17666.23 00:25:36.724 0 00:25:36.724 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:36.724 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:36.724 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:36.724 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:36.724 | select(.opcode=="crc32c") 00:25:36.724 | "\(.module_name) \(.executed)"' 00:25:36.724 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 223028 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 223028 ']' 00:25:36.984 17:07:43 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 223028 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 223028 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 223028' 00:25:36.984 killing process with pid 223028 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 223028 00:25:36.984 Received shutdown signal, test time was about 2.000000 seconds 00:25:36.984 00:25:36.984 Latency(us) 00:25:36.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.984 =================================================================================================================== 00:25:36.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:36.984 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 223028 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:37.244 17:07:43 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=223680 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 223680 /var/tmp/bperf.sock 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 223680 ']' 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:37.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:37.244 17:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:37.244 [2024-07-15 17:07:43.804699] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:25:37.244 [2024-07-15 17:07:43.804750] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223680 ] 00:25:37.244 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:37.244 Zero copy mechanism will not be used. 00:25:37.244 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.244 [2024-07-15 17:07:43.859114] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.529 [2024-07-15 17:07:43.939685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.098 17:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:38.098 17:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:38.098 17:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:38.098 17:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:38.098 17:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:38.358 17:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:38.358 17:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:38.620 nvme0n1 00:25:38.620 17:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:38.620 17:07:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:38.879 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:38.879 Zero copy mechanism will not be used. 00:25:38.879 Running I/O for 2 seconds... 00:25:40.787 00:25:40.787 Latency(us) 00:25:40.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.787 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:40.787 nvme0n1 : 2.00 4524.85 565.61 0.00 0.00 3533.48 812.08 12423.35 00:25:40.787 =================================================================================================================== 00:25:40.787 Total : 4524.85 565.61 0.00 0.00 3533.48 812.08 12423.35 00:25:40.787 0 00:25:40.787 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:40.787 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:40.787 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:40.787 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:40.787 | select(.opcode=="crc32c") 00:25:40.787 | "\(.module_name) \(.executed)"' 00:25:40.787 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 223680 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 223680 ']' 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 223680 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 223680 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 223680' 00:25:41.047 killing process with pid 223680 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 223680 00:25:41.047 Received shutdown signal, test time was about 2.000000 seconds 00:25:41.047 00:25:41.047 Latency(us) 00:25:41.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.047 =================================================================================================================== 00:25:41.047 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:41.047 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 223680 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:41.307 
17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=224218 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 224218 /var/tmp/bperf.sock 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 224218 ']' 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:41.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.307 17:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.307 [2024-07-15 17:07:47.783077] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:25:41.307 [2024-07-15 17:07:47.783126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224218 ] 00:25:41.307 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.307 [2024-07-15 17:07:47.837746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.307 [2024-07-15 17:07:47.905571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.246 17:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.246 17:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:42.246 17:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:42.246 17:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:42.246 17:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:42.246 17:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.247 17:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.866 nvme0n1 00:25:42.866 17:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:42.866 17:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:25:42.866 Running I/O for 2 seconds... 00:25:44.787 00:25:44.787 Latency(us) 00:25:44.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.787 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:44.787 nvme0n1 : 2.00 26844.90 104.86 0.00 0.00 4759.89 3419.27 14303.94 00:25:44.787 =================================================================================================================== 00:25:44.787 Total : 26844.90 104.86 0.00 0.00 4759.89 3419.27 14303.94 00:25:44.787 0 00:25:44.787 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:44.787 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:44.787 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:44.787 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:44.787 | select(.opcode=="crc32c") 00:25:44.787 | "\(.module_name) \(.executed)"' 00:25:44.787 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 224218 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 224218 ']' 00:25:45.045 17:07:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 224218 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 224218 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 224218' 00:25:45.045 killing process with pid 224218 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 224218 00:25:45.045 Received shutdown signal, test time was about 2.000000 seconds 00:25:45.045 00:25:45.045 Latency(us) 00:25:45.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.045 =================================================================================================================== 00:25:45.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:45.045 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 224218 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:45.304 17:07:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=224907 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 224907 /var/tmp/bperf.sock 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 224907 ']' 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:45.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:45.304 17:07:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:45.304 [2024-07-15 17:07:51.805161] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:25:45.304 [2024-07-15 17:07:51.805211] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224907 ] 00:25:45.304 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:45.304 Zero copy mechanism will not be used. 00:25:45.304 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.304 [2024-07-15 17:07:51.859063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.304 [2024-07-15 17:07:51.926750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.237 17:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.237 17:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:46.237 17:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:46.237 17:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:46.237 17:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:46.237 17:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:46.237 17:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:46.803 nvme0n1 00:25:46.803 17:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:46.803 17:07:53 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:46.803 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:46.803 Zero copy mechanism will not be used. 00:25:46.803 Running I/O for 2 seconds... 00:25:48.702 00:25:48.702 Latency(us) 00:25:48.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.702 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:48.702 nvme0n1 : 2.00 6407.29 800.91 0.00 0.00 2493.15 1731.01 6924.02 00:25:48.702 =================================================================================================================== 00:25:48.702 Total : 6407.29 800.91 0.00 0.00 2493.15 1731.01 6924.02 00:25:48.702 0 00:25:48.702 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:48.702 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:48.702 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:48.702 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:48.702 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:48.702 | select(.opcode=="crc32c") 00:25:48.702 | "\(.module_name) \(.executed)"' 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 224907 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 224907 ']' 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 224907 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 224907 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 224907' 00:25:48.960 killing process with pid 224907 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 224907 00:25:48.960 Received shutdown signal, test time was about 2.000000 seconds 00:25:48.960 00:25:48.960 Latency(us) 00:25:48.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.960 =================================================================================================================== 00:25:48.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:48.960 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 224907 00:25:49.218 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 222782 00:25:49.218 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 222782 ']' 00:25:49.218 17:07:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 222782 00:25:49.218 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:49.218 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.218 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 222782 00:25:49.218 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:49.218 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:49.218 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 222782' 00:25:49.218 killing process with pid 222782 00:25:49.218 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 222782 00:25:49.218 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 222782 00:25:49.476 00:25:49.476 real 0m17.264s 00:25:49.476 user 0m32.984s 00:25:49.476 sys 0m4.548s 00:25:49.476 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:49.476 17:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:49.476 ************************************ 00:25:49.476 END TEST nvmf_digest_clean 00:25:49.476 ************************************ 00:25:49.476 17:07:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:49.476 17:07:55 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:49.476 17:07:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:49.476 17:07:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:49.476 17:07:55 nvmf_tcp.nvmf_digest 
-- common/autotest_common.sh@10 -- # set +x 00:25:49.476 ************************************ 00:25:49.476 START TEST nvmf_digest_error 00:25:49.476 ************************************ 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=225633 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 225633 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 225633 ']' 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.476 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:49.476 [2024-07-15 17:07:56.079053] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:25:49.476 [2024-07-15 17:07:56.079093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.476 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.476 [2024-07-15 17:07:56.136497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.734 [2024-07-15 17:07:56.204162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.734 [2024-07-15 17:07:56.204203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.734 [2024-07-15 17:07:56.204210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.734 [2024-07-15 17:07:56.204216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.734 [2024-07-15 17:07:56.204221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:49.735 [2024-07-15 17:07:56.204260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.303 [2024-07-15 17:07:56.914311] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.303 17:07:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.561 null0 00:25:50.561 [2024-07-15 17:07:57.008585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.561 
[2024-07-15 17:07:57.032751] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=225878 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 225878 /var/tmp/bperf.sock 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 225878 ']' 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:50.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.561 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.561 [2024-07-15 17:07:57.082326] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:25:50.561 [2024-07-15 17:07:57.082367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid225878 ] 00:25:50.561 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.561 [2024-07-15 17:07:57.136769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.561 [2024-07-15 17:07:57.215956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.495 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:51.495 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:51.495 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:51.495 17:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:51.495 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:51.495 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.495 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.495 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:51.495 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.495 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.062 nvme0n1 00:25:52.062 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:52.062 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.062 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.062 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.062 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:52.062 17:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.062 Running I/O for 2 seconds... 
00:25:52.062 [2024-07-15 17:07:58.566119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.566152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.566162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.578355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.578380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.578389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.587952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.587975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.587984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.596468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.596490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.596498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.606435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.606455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.606463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.616140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.616160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.616168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.626007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.626028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.626035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.635294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.635313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.635324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.644605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.644624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.644631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.652438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.652457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.652465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.663380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.663399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.663407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.672483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.672501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13311 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.672509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.682726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.682746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.062 [2024-07-15 17:07:58.682753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.062 [2024-07-15 17:07:58.692559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.062 [2024-07-15 17:07:58.692578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.063 [2024-07-15 17:07:58.692586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.063 [2024-07-15 17:07:58.702665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.063 [2024-07-15 17:07:58.702684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.063 [2024-07-15 17:07:58.702692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.063 [2024-07-15 17:07:58.711136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.063 [2024-07-15 17:07:58.711155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:59 nsid:1 lba:11868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.063 [2024-07-15 17:07:58.711163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.063 [2024-07-15 17:07:58.720427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.063 [2024-07-15 17:07:58.720449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.063 [2024-07-15 17:07:58.720457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.732478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.732500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.732508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.741401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.741421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.741429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.749591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.749611] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.749618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.760082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.760102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.760109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.769845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.769865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.769873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.779323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.779342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.779350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.788311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.788330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.788338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.796881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.796900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.796908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.807009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.807029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.807037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.817473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.817492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.817500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.826445] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.826465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.826473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.837453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.837474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.837481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.845975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.845996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.846004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.856606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.856625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.856633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.865958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.865978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.865985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.874433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.874453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.874460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.884299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.884319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.884330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.893715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.893735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.893743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.902824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.902843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.902851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.911448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.911468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.911476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.922031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.922051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.922059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.930933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.930953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 
17:07:58.930960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.940539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.940557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.940565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.949571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.949591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.949599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.959822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.959842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.959850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.967538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.323 [2024-07-15 17:07:58.967561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15900 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.323 [2024-07-15 17:07:58.967569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.323 [2024-07-15 17:07:58.977524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.324 [2024-07-15 17:07:58.977544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.324 [2024-07-15 17:07:58.977552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.324 [2024-07-15 17:07:58.987569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.324 [2024-07-15 17:07:58.987590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.324 [2024-07-15 17:07:58.987598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:58.997426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:58.997448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:58.997456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.005850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.005871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.005879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.015558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.015579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.015588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.024890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.024909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.024917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.034534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.034554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.034563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.045600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.045621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.045635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.056902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.056923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.056931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.067660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.067681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.067689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.076275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.076297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.076305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.086105] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.086126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.086133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.094982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.095004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.095012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.106245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.106265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.106273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.116821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.116841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.116849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.125730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.125750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.125757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.135945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.135969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.135977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.145720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.145740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.145747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.155158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.155178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.155186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.164729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.164749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.164757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.174481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.174501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.174509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.184767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.184787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.184795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.193439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.193458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 
17:07:59.193466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.204999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.205019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.205027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.214365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.583 [2024-07-15 17:07:59.214385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-07-15 17:07:59.214392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.583 [2024-07-15 17:07:59.223344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.584 [2024-07-15 17:07:59.223363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 17:07:59.223371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 17:07:59.232133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.584 [2024-07-15 17:07:59.232152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11662 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 17:07:59.232160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 17:07:59.241577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.584 [2024-07-15 17:07:59.241597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 17:07:59.241605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.251359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.251383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.251391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.260905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.260925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.260933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.270359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.270380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.270388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.279618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.279639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.279646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.289034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.289053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.289061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.298981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.299001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.299013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.308245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.308265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.308272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.317861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.317881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.317889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.326195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.326215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.326223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.335850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.335869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.335877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.345267] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.345288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.345295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.355541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.355561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.355568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.363929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.363949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.363956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.374503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.374524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.374531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.383824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.383848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.383856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.392590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.392610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.392619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.402813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.843 [2024-07-15 17:07:59.402833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-15 17:07:59.402840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.843 [2024-07-15 17:07:59.411093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.844 [2024-07-15 17:07:59.411113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.844 [2024-07-15 17:07:59.411122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.844 [2024-07-15 17:07:59.421600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.844 [2024-07-15 17:07:59.421620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.844 [2024-07-15 17:07:59.421628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.844 [2024-07-15 17:07:59.431783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.844 [2024-07-15 17:07:59.431803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.844 [2024-07-15 17:07:59.431811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.844 [2024-07-15 17:07:59.440512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.844 [2024-07-15 17:07:59.440531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.844 [2024-07-15 17:07:59.440539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.844 [2024-07-15 17:07:59.450313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.844 [2024-07-15 17:07:59.450334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.844 [2024-07-15 
17:07:59.450342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.844 [2024-07-15 17:07:59.460202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.844 [2024-07-15 17:07:59.460222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.844 [2024-07-15 17:07:59.460238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.844 [2024-07-15 17:07:59.469610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.844 [2024-07-15 17:07:59.469629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.844 [2024-07-15 17:07:59.469637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.844 [2024-07-15 17:07:59.478532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.844 [2024-07-15 17:07:59.478551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.844 [2024-07-15 17:07:59.478559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.844 [2024-07-15 17:07:59.488646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.844 [2024-07-15 17:07:59.488665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3108 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.844 [2024-07-15 17:07:59.488673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.844 [2024-07-15 17:07:59.497983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.844 [2024-07-15 17:07:59.498002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.844 [2024-07-15 17:07:59.498010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.844 [2024-07-15 17:07:59.507037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:52.844 [2024-07-15 17:07:59.507056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.844 [2024-07-15 17:07:59.507064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.104 [2024-07-15 17:07:59.516265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.104 [2024-07-15 17:07:59.516285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.104 [2024-07-15 17:07:59.516293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.104 [2024-07-15 17:07:59.526544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.104 [2024-07-15 17:07:59.526564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.104 [2024-07-15 17:07:59.526571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.104 [2024-07-15 17:07:59.535206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20)
00:25:53.104 [2024-07-15 17:07:59.535232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.104 [2024-07-15 17:07:59.535240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.104 [2024-07-15 17:07:59.544618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20)
00:25:53.104 [2024-07-15 17:07:59.544640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.104 [2024-07-15 17:07:59.544647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... same three-entry pattern (nvme_tcp.c:1459 data digest error on tqpair=(0x21d8f20), nvme_qpair.c:243 READ command notice, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeated for further cids, from 17:07:59.554151 through 17:08:00.295371 (log timestamps 00:25:53.104 - 00:25:53.651) ...]
00:25:53.651 [2024-07-15 17:08:00.303879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20)
00:25:53.651 [2024-07-15 17:08:00.303899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15584 len:1
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.651 [2024-07-15 17:08:00.303907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.651 [2024-07-15 17:08:00.314727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.651 [2024-07-15 17:08:00.314747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.651 [2024-07-15 17:08:00.314754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.911 [2024-07-15 17:08:00.324529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.911 [2024-07-15 17:08:00.324551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.911 [2024-07-15 17:08:00.324559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.911 [2024-07-15 17:08:00.332445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.911 [2024-07-15 17:08:00.332465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.911 [2024-07-15 17:08:00.332473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.911 [2024-07-15 17:08:00.342386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.911 [2024-07-15 17:08:00.342406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.911 [2024-07-15 17:08:00.342414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.911 [2024-07-15 17:08:00.354195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.911 [2024-07-15 17:08:00.354214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.911 [2024-07-15 17:08:00.354222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.911 [2024-07-15 17:08:00.362703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.911 [2024-07-15 17:08:00.362723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.911 [2024-07-15 17:08:00.362731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.911 [2024-07-15 17:08:00.374863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.911 [2024-07-15 17:08:00.374886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.911 [2024-07-15 17:08:00.374895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.911 [2024-07-15 17:08:00.383803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21d8f20) 00:25:53.911 [2024-07-15 17:08:00.383823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.911 [2024-07-15 17:08:00.383831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.911 [2024-07-15 17:08:00.392651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.911 [2024-07-15 17:08:00.392672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.392681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.403659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.403678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.403686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.413213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.413237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.413245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.422552] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.422571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.422579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.431508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.431530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.431540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.441652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.441672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.441680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.450027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.450046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.450054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.460307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.460327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.460335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.470054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.470075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.470083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.479610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.479630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.479637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.489632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.489653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.489660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.497688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.497709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.497717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.507807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.507829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.507837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.517871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.517892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.517900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.526330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.526350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 
17:08:00.526359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.537490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.537512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.537523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.546065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.546085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.546093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 [2024-07-15 17:08:00.555882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d8f20) 00:25:53.912 [2024-07-15 17:08:00.555902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.912 [2024-07-15 17:08:00.555910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.912 00:25:53.912 Latency(us) 00:25:53.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.912 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:53.912 nvme0n1 : 2.01 26533.11 103.64 0.00 0.00 4819.46 2421.98 13164.19 
00:25:53.912 =================================================================================================================== 00:25:53.912 Total : 26533.11 103.64 0.00 0.00 4819.46 2421.98 13164.19 00:25:53.912 0 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:54.172 | .driver_specific 00:25:54.172 | .nvme_error 00:25:54.172 | .status_code 00:25:54.172 | .command_transient_transport_error' 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 )) 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 225878 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 225878 ']' 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 225878 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 225878 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:54.172 17:08:00 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 225878' 00:25:54.172 killing process with pid 225878 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 225878 00:25:54.172 Received shutdown signal, test time was about 2.000000 seconds 00:25:54.172 00:25:54.172 Latency(us) 00:25:54.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.172 =================================================================================================================== 00:25:54.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.172 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 225878 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=226570 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 226570 /var/tmp/bperf.sock 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 226570 ']' 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:54.432 
17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:54.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:54.432 17:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.432 [2024-07-15 17:08:01.043852] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:25:54.432 [2024-07-15 17:08:01.043901] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226570 ] 00:25:54.432 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:54.432 Zero copy mechanism will not be used. 
00:25:54.432 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.432 [2024-07-15 17:08:01.097397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.691 [2024-07-15 17:08:01.170455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.259 17:08:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.259 17:08:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:55.259 17:08:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:55.259 17:08:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:55.517 17:08:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:55.517 17:08:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.517 17:08:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.517 17:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.517 17:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.518 17:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.776 nvme0n1 00:25:55.776 17:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:25:55.776 17:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.776 17:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.776 17:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.776 17:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:55.776 17:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:55.776 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:55.776 Zero copy mechanism will not be used. 00:25:55.776 Running I/O for 2 seconds... 00:25:55.776 [2024-07-15 17:08:02.408441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:55.776 [2024-07-15 17:08:02.408473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.776 [2024-07-15 17:08:02.408483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.776 [2024-07-15 17:08:02.416805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:55.776 [2024-07-15 17:08:02.416830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.776 [2024-07-15 17:08:02.416838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.776 [2024-07-15 17:08:02.424644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe350b0) 00:25:55.776 [2024-07-15 17:08:02.424666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.776 [2024-07-15 17:08:02.424675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.776 [2024-07-15 17:08:02.432089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:55.776 [2024-07-15 17:08:02.432109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.776 [2024-07-15 17:08:02.432117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.776 [2024-07-15 17:08:02.438819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:55.776 [2024-07-15 17:08:02.438839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.776 [2024-07-15 17:08:02.438847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.445279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.445298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.445306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.451646] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.451665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.451673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.457939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.457959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.457967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.464198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.464218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.464231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.470339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.470359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.470367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.476363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.476382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.476390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.481745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.481766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.481773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.487898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.487918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.487926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.494097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.494118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.494125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.500513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.500535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.500543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.507015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.507037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.507048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.513475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.513496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.513504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.519677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.519697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.519705] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.525853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.525872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.525880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.532111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.532132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.532142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.538220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.538246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.538254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.544415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.544435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.544443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.550119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.550140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.550148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.557523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.557543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.557551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.566389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.566412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.566420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.574626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.574647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.574655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.582379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.582399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.582407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.590490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.590511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.590519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.037 [2024-07-15 17:08:02.598495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.037 [2024-07-15 17:08:02.598517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.037 [2024-07-15 17:08:02.598525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.606464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.038 [2024-07-15 17:08:02.606486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.606494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.613933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.038 [2024-07-15 17:08:02.613953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.613961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.622772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.038 [2024-07-15 17:08:02.622793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.622801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.631768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.038 [2024-07-15 17:08:02.631789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.631798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.640876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 
00:25:56.038 [2024-07-15 17:08:02.640896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.640904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.650843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.038 [2024-07-15 17:08:02.650864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.650872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.660165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.038 [2024-07-15 17:08:02.660186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.660194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.668839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.038 [2024-07-15 17:08:02.668860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.668868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.677463] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.038 [2024-07-15 17:08:02.677483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.677491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.685392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.038 [2024-07-15 17:08:02.685412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.685420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.692783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.038 [2024-07-15 17:08:02.692803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.692810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.038 [2024-07-15 17:08:02.699912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.038 [2024-07-15 17:08:02.699933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.038 [2024-07-15 17:08:02.699941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:25:56.298 [2024-07-15 17:08:02.706165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.298 [2024-07-15 17:08:02.706186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-07-15 17:08:02.706196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.298 [2024-07-15 17:08:02.712732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.298 [2024-07-15 17:08:02.712752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.298 [2024-07-15 17:08:02.712760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.298 [2024-07-15 17:08:02.718630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.298 [2024-07-15 17:08:02.718651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.718659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.724681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.724701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.724708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.733777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.733797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.733805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.742039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.742059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.742067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.749985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.750005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.750013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.757523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.757544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.757552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.764501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.764520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.764528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.771136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.771159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.771166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.777366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.777388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.777395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.783741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.783761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:56.299 [2024-07-15 17:08:02.783769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.790015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.790036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.790044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.795848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.795869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.795877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.804823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.804843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.804850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.813319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.813339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.813347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.821454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.821475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.821482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.829009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.829029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.829036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.836119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.836139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.836146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.842705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.842724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.842731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.849084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.849104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.849112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.855194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.855214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.855221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.861814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.861833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.861841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.868027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 
00:25:56.299 [2024-07-15 17:08:02.868048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.868055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.873664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.873684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.873692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.879328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.879348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.879355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.885166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.885186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.885197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.891148] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.891168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.891176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.897001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.897021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.897029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.905330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.905349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.905357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.299 [2024-07-15 17:08:02.914100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.299 [2024-07-15 17:08:02.914120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.299 [2024-07-15 17:08:02.914128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:25:56.299 [2024-07-15 17:08:02.921989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:56.299 [2024-07-15 17:08:02.922009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.300 [2024-07-15 17:08:02.922017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:56.300 [2024-07-15 17:08:02.929529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:56.300 [2024-07-15 17:08:02.929548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.300 [2024-07-15 17:08:02.929556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:56.300 [2024-07-15 17:08:02.936606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:56.300 [2024-07-15 17:08:02.936626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.300 [2024-07-15 17:08:02.936634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line entry group (nvme_tcp.c:1459 data digest error on tqpair=(0xe350b0), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats continuously from 17:08:02.943 through 17:08:03.454 for qid:1, cids 0-6 and 15, nsid:1, len:32, with varying lba values ...]
00:25:56.858 [2024-07-15 17:08:03.460189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:56.858 [2024-07-15 17:08:03.460210] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.460218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.858 [2024-07-15 17:08:03.465994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.858 [2024-07-15 17:08:03.466015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.466024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.858 [2024-07-15 17:08:03.471698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.858 [2024-07-15 17:08:03.471718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.471726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.858 [2024-07-15 17:08:03.477413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.858 [2024-07-15 17:08:03.477434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.477442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.858 [2024-07-15 17:08:03.483169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 
00:25:56.858 [2024-07-15 17:08:03.483190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.483197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.858 [2024-07-15 17:08:03.488791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.858 [2024-07-15 17:08:03.488812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.488820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.858 [2024-07-15 17:08:03.494472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.858 [2024-07-15 17:08:03.494493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.494500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.858 [2024-07-15 17:08:03.500210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.858 [2024-07-15 17:08:03.500237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.500245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.858 [2024-07-15 17:08:03.505938] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.858 [2024-07-15 17:08:03.505959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.505966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.858 [2024-07-15 17:08:03.511552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.858 [2024-07-15 17:08:03.511573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.511584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.858 [2024-07-15 17:08:03.517042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.858 [2024-07-15 17:08:03.517063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.517071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.858 [2024-07-15 17:08:03.522749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:56.858 [2024-07-15 17:08:03.522770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.858 [2024-07-15 17:08:03.522778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.528394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.528415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.528423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.533996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.534017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.534025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.540264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.540284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.540291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.546378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.546399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.546407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.551981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.552002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.552011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.557416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.557439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.557447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.562917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.562942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.562949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.569128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.569150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.569158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.576179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.576202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.576210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.583827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.583848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.583856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.590103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.590124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.590132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.596310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.596332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.118 [2024-07-15 17:08:03.596340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.602484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.602505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.602513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.608792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.608812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.118 [2024-07-15 17:08:03.608820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.118 [2024-07-15 17:08:03.614904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.118 [2024-07-15 17:08:03.614926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.614934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.620941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.620961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.620968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.626863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.626884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.626892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.632807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.632829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.632837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.638673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.638694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.638701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.643994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.644014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.644022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.647403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.647423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.647430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.652954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.652975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.652982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.658452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.658472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.658480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.663972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 
00:25:57.119 [2024-07-15 17:08:03.663992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.664003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.669541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.669561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.669569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.675273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.675294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.675302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.680775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.680798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.680806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.686471] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.686492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.686500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.692193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.692214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.692222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.697749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.697770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.697777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.703329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.703350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.703357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.708926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.708946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.708954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.714317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.714340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.714348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.719849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.719870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.719877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.725359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.725380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.725387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.730866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.730886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.730894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.736549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.736570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.736577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.742105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.742126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.742134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.119 [2024-07-15 17:08:03.747636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.119 [2024-07-15 17:08:03.747657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.119 [2024-07-15 17:08:03.747665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.119 [2024-07-15 17:08:03.753353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.119 [2024-07-15 17:08:03.753373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.119 [2024-07-15 17:08:03.753381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.119 [2024-07-15 17:08:03.759036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.119 [2024-07-15 17:08:03.759056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.119 [2024-07-15 17:08:03.759063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.119 [2024-07-15 17:08:03.764703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.119 [2024-07-15 17:08:03.764723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.119 [2024-07-15 17:08:03.764731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.119 [2024-07-15 17:08:03.770313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.119 [2024-07-15 17:08:03.770333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.119 [2024-07-15 17:08:03.770341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.119 [2024-07-15 17:08:03.775980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.119 [2024-07-15 17:08:03.776000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.119 [2024-07-15 17:08:03.776007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.120 [2024-07-15 17:08:03.781733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.120 [2024-07-15 17:08:03.781754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.120 [2024-07-15 17:08:03.781761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.787508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.787529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.787537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.793209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.793235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.793243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.798829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.798848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.798855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.804575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.804595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.804603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.810312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.810333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.810344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.815796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.815816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.815823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.821321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.821341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.821349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.826906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.826926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.826934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.832554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.832574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.832582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.838157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.838177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.838185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.843751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.843771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.843778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.849252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.849271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.849279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.854846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.854867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.854875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.860513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.860537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.860545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.865946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.865966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.865973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.871331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.871351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.871358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.876790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.876810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.876818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.882358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.882378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.882386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.888053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.888073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.888080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.893660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.893680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.893688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.899217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.899242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.899249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.904814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.904834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.380 [2024-07-15 17:08:03.904842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.380 [2024-07-15 17:08:03.910471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.380 [2024-07-15 17:08:03.910491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.910499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.916269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.916289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.916296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.921944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.921965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.921972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.927600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.927620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.927628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.933239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.933259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.933266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.938997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.939017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.939025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.944638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.944658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.944665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.950175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.950195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.950203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.955771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.955792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.955803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.961425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.961446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.961454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.967051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.967072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.967079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.972610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.972631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.972638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.978160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.978180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.978188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.983765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.983785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.983794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.989540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.989561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.989568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:03.995220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:03.995246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:03.995254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:04.000803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:04.000823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:04.000831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:04.006402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:04.006423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:04.006431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:04.012116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:04.012136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:04.012144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:04.017780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:04.017800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:04.017808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:04.023448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:04.023469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:04.023477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:04.029060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:04.029080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:04.029088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:04.034736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:04.034756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:04.034764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:04.040547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:04.040567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:04.040575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.381 [2024-07-15 17:08:04.046333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.381 [2024-07-15 17:08:04.046352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.381 [2024-07-15 17:08:04.046360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.052052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.052073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.052084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.057885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.057905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.057913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.063638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.063658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.063666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.069311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.069331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.069338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.074993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.075012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.075020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.080709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.080729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.080737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.086288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.086308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.086316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.091796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.091815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.091823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.097345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.097364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.097371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.102912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.102935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.102943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.108380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.108400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.108408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.113961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.113981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.113989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.119449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.119469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.119476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.124914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.124934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.124942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.130467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.130487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.130495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.135992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.136013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.136020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.141494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.141515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.141523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.146981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.642 [2024-07-15 17:08:04.147001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.642 [2024-07-15 17:08:04.147009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.642 [2024-07-15 17:08:04.152477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.643 [2024-07-15 17:08:04.152497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.643 [2024-07-15 17:08:04.152505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.643 [2024-07-15 17:08:04.158027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.643 [2024-07-15 17:08:04.158046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.643 [2024-07-15 17:08:04.158054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.643 [2024-07-15 17:08:04.163655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.643 [2024-07-15 17:08:04.163675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.643 [2024-07-15 17:08:04.163683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.643 [2024-07-15 17:08:04.169151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.643 [2024-07-15 17:08:04.169171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.643 [2024-07-15 17:08:04.169178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.643 [2024-07-15 17:08:04.174649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.643 [2024-07-15 17:08:04.174670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.643 [2024-07-15 17:08:04.174677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.643 [2024-07-15 17:08:04.180167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.643 [2024-07-15 17:08:04.180187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.643 [2024-07-15 17:08:04.180195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.643 [2024-07-15 17:08:04.185659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.643 [2024-07-15 17:08:04.185679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.643 [2024-07-15 17:08:04.185687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.643 [2024-07-15 17:08:04.191264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.643 [2024-07-15 17:08:04.191284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.643 [2024-07-15 17:08:04.191293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.643 [2024-07-15 17:08:04.196862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.643 [2024-07-15 17:08:04.196882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.643 [2024-07-15 17:08:04.196893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.643 [2024-07-15 17:08:04.202435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0)
00:25:57.643 [2024-07-15 17:08:04.202454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.643 [2024-07-15 17:08:04.202462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.643 [2024-07-15 17:08:04.207929]
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.207949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.207957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.213502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.213522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.213530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.219223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.219248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.219256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.224939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.224959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.224967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.230492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.230513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.230521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.236056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.236077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.236085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.241699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.241719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.241727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.247307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.247331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.247339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.252908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.252928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.252936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.258447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.258468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.258476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.263966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.263986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.263994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.269548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.269569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.269576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.275143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.275162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.275169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.280630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.280651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.280658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.286102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.286122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.286130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.291587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.291607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.643 [2024-07-15 17:08:04.291614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.297170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.297191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.297199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.302857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.302877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.302885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.643 [2024-07-15 17:08:04.308511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.643 [2024-07-15 17:08:04.308532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.643 [2024-07-15 17:08:04.308540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.902 [2024-07-15 17:08:04.314043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.902 [2024-07-15 17:08:04.314064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.902 [2024-07-15 17:08:04.314072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.902 [2024-07-15 17:08:04.319597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.902 [2024-07-15 17:08:04.319616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.319624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.325172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.325192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.325200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.330714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.330735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.330743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.336285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.336305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.336312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.341731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.341752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.341762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.347287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.347306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.347314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.352871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.352891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.352898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.358549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 
00:25:57.903 [2024-07-15 17:08:04.358568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.358575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.364220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.364245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.364253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.369681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.369701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.369709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.375170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.375190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.375198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.380713] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.380733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.380742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.386295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.386315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.386323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.391828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.391852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.391860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.903 [2024-07-15 17:08:04.397425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe350b0) 00:25:57.903 [2024-07-15 17:08:04.397445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-07-15 17:08:04.397452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:25:57.903
00:25:57.903 Latency(us)
00:25:57.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:57.903 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:57.903 nvme0n1 : 2.00 4955.53 619.44 0.00 0.00 3225.67 666.05 13848.04
00:25:57.903 ===================================================================================================================
00:25:57.903 Total : 4955.53 619.44 0.00 0.00 3225.67 666.05 13848.04
00:25:57.903 0
00:25:57.903 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:57.903 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:57.903 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:57.903 | .driver_specific
00:25:57.903 | .nvme_error
00:25:57.903 | .status_code
00:25:57.903 | .command_transient_transport_error'
00:25:57.903 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 320 > 0 ))
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 226570
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 226570 ']'
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 226570
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 226570
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 226570'
00:25:58.162 killing process with pid 226570
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 226570
00:25:58.162 Received shutdown signal, test time was about 2.000000 seconds
00:25:58.162
00:25:58.162 Latency(us)
00:25:58.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:58.162 ===================================================================================================================
00:25:58.162 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 226570
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=227082
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 227082 /var/tmp/bperf.sock
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 227082 ']'
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:58.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:58.162 17:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:58.421 [2024-07-15 17:08:04.868739] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:25:58.421 [2024-07-15 17:08:04.868791] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227082 ]
00:25:58.421 EAL: No free 2048 kB hugepages reported on node 1
00:25:58.421 [2024-07-15 17:08:04.923347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:58.421 [2024-07-15 17:08:05.003106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:59.358 17:08:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:59.358 17:08:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:25:59.358 17:08:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:59.358 17:08:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:59.358 17:08:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:59.358 17:08:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:59.358 17:08:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:59.358 17:08:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:59.358 17:08:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:59.358 17:08:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:59.617 nvme0n1
00:25:59.617 17:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:59.617 17:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:59.617 17:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:59.617 17:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:59.617 17:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:59.617 17:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:59.877 Running I/O for 2 seconds...
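[Editor's note: the `get_transient_errcount` check traced in the log above pipes `rpc.py bdev_get_iostat -b nvme0n1` through a jq filter and asserts the resulting counter is positive (here, `(( 320 > 0 ))`). The sketch below mirrors that extraction in Python; the `sample_iostat` payload is a hypothetical, trimmed stand-in for the real RPC response, with field names taken from the jq path shown in the trace.]

```python
# Hypothetical, trimmed stand-in for the JSON that
# `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`
# returns when the controller was created with --nvme-error-stat.
sample_iostat = {
    "bdevs": [
        {
            "name": "nvme0n1",
            "driver_specific": {
                "nvme_error": {
                    "status_code": {
                        # incremented once per COMMAND TRANSIENT TRANSPORT
                        # ERROR completion; 320 matches the log above
                        "command_transient_transport_error": 320
                    }
                }
            }
        }
    ]
}

def get_transient_errcount(iostat: dict) -> int:
    """Mirror the jq filter from the trace:
    .bdevs[0] | .driver_specific | .nvme_error
             | .status_code | .command_transient_transport_error"""
    return iostat["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

count = get_transient_errcount(sample_iostat)
print(count)           # prints 320
assert count > 0       # the test's pass condition: errors were injected and counted
```

The test passes as long as the counter is non-zero, since the accel error injection (`accel_error_inject_error -o crc32c -t corrupt`) guarantees some reads complete with a data digest error.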
00:25:59.877 [2024-07-15 17:08:06.337033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:25:59.877 [2024-07-15 17:08:06.337216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.877 [2024-07-15 17:08:06.337250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:59.877 [2024-07-15 17:08:06.346658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:25:59.877 [2024-07-15 17:08:06.346829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.877 [2024-07-15 17:08:06.346852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:59.877 [2024-07-15 17:08:06.356248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:25:59.877 [2024-07-15 17:08:06.356439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.877 [2024-07-15 17:08:06.356459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:59.877 [2024-07-15 17:08:06.365820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:25:59.877 [2024-07-15 17:08:06.365988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.877 [2024-07-15 17:08:06.366007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.877 [2024-07-15 17:08:06.375374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.877 [2024-07-15 17:08:06.375543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.877 [2024-07-15 17:08:06.375562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.877 [2024-07-15 17:08:06.384945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.877 [2024-07-15 17:08:06.385131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.877 [2024-07-15 17:08:06.385150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.877 [2024-07-15 17:08:06.394520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.877 [2024-07-15 17:08:06.394687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.877 [2024-07-15 17:08:06.394705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.877 [2024-07-15 17:08:06.404154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.877 [2024-07-15 17:08:06.404343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.877 [2024-07-15 17:08:06.404362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.877 [2024-07-15 17:08:06.413684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.877 [2024-07-15 17:08:06.413853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.877 [2024-07-15 17:08:06.413872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.877 [2024-07-15 17:08:06.423176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.877 [2024-07-15 17:08:06.423363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.877 [2024-07-15 17:08:06.423381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.877 [2024-07-15 17:08:06.432739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.877 [2024-07-15 17:08:06.432920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.877 [2024-07-15 17:08:06.432938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.877 [2024-07-15 17:08:06.442266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.877 [2024-07-15 17:08:06.442430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.877 [2024-07-15 17:08:06.442450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.878 [2024-07-15 17:08:06.451793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.878 [2024-07-15 17:08:06.451955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.878 [2024-07-15 17:08:06.451973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.878 [2024-07-15 17:08:06.461317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.878 [2024-07-15 17:08:06.461500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.878 [2024-07-15 17:08:06.461519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.878 [2024-07-15 17:08:06.470877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.878 [2024-07-15 17:08:06.471059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.878 [2024-07-15 17:08:06.471077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.878 [2024-07-15 17:08:06.480415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.878 [2024-07-15 17:08:06.480581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.878 [2024-07-15 17:08:06.480599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.878 [2024-07-15 17:08:06.489887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.878 [2024-07-15 17:08:06.490052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.878 [2024-07-15 17:08:06.490070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.878 [2024-07-15 17:08:06.499472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.878 [2024-07-15 17:08:06.499654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.878 [2024-07-15 17:08:06.499672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.878 [2024-07-15 17:08:06.508985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.878 [2024-07-15 17:08:06.509148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.878 [2024-07-15 17:08:06.509165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.878 [2024-07-15 17:08:06.518484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.878 [2024-07-15 17:08:06.518676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.878 [2024-07-15 17:08:06.518694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.878 [2024-07-15 17:08:06.528021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.878 [2024-07-15 17:08:06.528184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.878 [2024-07-15 17:08:06.528203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:59.878 [2024-07-15 17:08:06.537490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:25:59.878 [2024-07-15 17:08:06.537653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.878 [2024-07-15 17:08:06.537670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.137 [2024-07-15 17:08:06.547188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.547378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.547395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.556856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.557021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.557038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.566356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.566519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.566536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.575890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.576074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.576094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.585550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.585732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.585751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.595250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.595434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.595451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.604947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.605130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.605148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.614627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.614808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.614825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.624288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.624469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.624487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.634049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.634217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.634238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.643785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.643951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.643969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.653530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.653696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.653713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.663130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.663323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.663341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.672635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.672798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.672815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.682364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.682554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.682571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.691853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.692018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.692035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.701323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.701505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.701521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.710830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.710993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.711010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.720401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.720565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.720582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.729885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.730066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.730083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.739386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.739547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.739564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.748830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.749017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.749034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.758348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.758508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.758525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.767788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.767952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.767968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.138 [2024-07-15 17:08:06.777315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.138 [2024-07-15 17:08:06.777502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.138 [2024-07-15 17:08:06.777519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.139 [2024-07-15 17:08:06.786861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.139 [2024-07-15 17:08:06.787025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.139 [2024-07-15 17:08:06.787042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.139 [2024-07-15 17:08:06.796348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.139 [2024-07-15 17:08:06.796529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.139 [2024-07-15 17:08:06.796547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.806027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.806191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.806209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.815686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.815849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.815866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.825382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.825575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.825595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.834787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.834949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.834966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.844266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.844448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.844466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.853787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.853964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.853981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.863400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.863583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.863601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.873103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.873272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.873290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.882761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.882923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.882940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.892261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.892443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.892460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.901805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.901967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.901983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.911279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.911452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.911468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.920823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.399 [2024-07-15 17:08:06.920992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.399 [2024-07-15 17:08:06.921009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.399 [2024-07-15 17:08:06.930319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:06.930481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:06.930498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:06.939772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:06.939934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:06.939951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:06.949349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:06.949516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:06.949533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:06.958842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:06.959004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:06.959021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:06.968327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:06.968508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:06.968525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:06.977856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:06.978016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:06.978034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:06.987319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:06.987483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:06.987500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:06.996814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:06.996998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:06.997015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:07.006352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:07.006514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:07.006531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:07.015787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:07.015948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:07.015966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:07.025335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:07.025510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:07.025527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:07.034764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:07.034928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:07.034944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:07.044179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:07.044380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:07.044398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:07.053705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:07.053870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:07.053887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.400 [2024-07-15 17:08:07.063246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.400 [2024-07-15 17:08:07.063409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.400 [2024-07-15 17:08:07.063426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.659 [2024-07-15 17:08:07.073055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.659 [2024-07-15 17:08:07.073244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.659 [2024-07-15 17:08:07.073268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.659 [2024-07-15 17:08:07.082618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.659 [2024-07-15 17:08:07.082783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.659 [2024-07-15 17:08:07.082800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.659 [2024-07-15 17:08:07.092042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.659 [2024-07-15 17:08:07.092205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.659 [2024-07-15 17:08:07.092222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.659 [2024-07-15 17:08:07.101778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.659 [2024-07-15 17:08:07.101941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.659 [2024-07-15 17:08:07.101958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.659 [2024-07-15 17:08:07.111248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90
00:26:00.659 [2024-07-15 17:08:07.111416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.659 [2024-07-15 17:08:07.111434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:00.659 [2024-07-15 17:08:07.120982] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.121149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.121167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.659 [2024-07-15 17:08:07.130606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.130786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.130804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.659 [2024-07-15 17:08:07.140271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.140440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.140457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.659 [2024-07-15 17:08:07.149808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.149991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.150009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:26:00.659 [2024-07-15 17:08:07.159314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.159484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.159501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.659 [2024-07-15 17:08:07.168806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.168987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.169004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.659 [2024-07-15 17:08:07.178330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.178505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.178522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.659 [2024-07-15 17:08:07.187763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.187946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.187963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.659 [2024-07-15 17:08:07.197277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.197456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.197474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.659 [2024-07-15 17:08:07.206795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.206959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.206976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.659 [2024-07-15 17:08:07.216262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.216448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.216465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.659 [2024-07-15 17:08:07.225795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.659 [2024-07-15 17:08:07.225955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.659 [2024-07-15 17:08:07.225972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.660 [2024-07-15 17:08:07.235306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.660 [2024-07-15 17:08:07.235470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.660 [2024-07-15 17:08:07.235486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.660 [2024-07-15 17:08:07.244807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.660 [2024-07-15 17:08:07.244989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.660 [2024-07-15 17:08:07.245006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.660 [2024-07-15 17:08:07.254315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.660 [2024-07-15 17:08:07.254477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.660 [2024-07-15 17:08:07.254494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.660 [2024-07-15 17:08:07.263761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.660 [2024-07-15 17:08:07.263924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.660 [2024-07-15 17:08:07.263941] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.660 [2024-07-15 17:08:07.273291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.660 [2024-07-15 17:08:07.273472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.660 [2024-07-15 17:08:07.273500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.660 [2024-07-15 17:08:07.282777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.660 [2024-07-15 17:08:07.282938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.660 [2024-07-15 17:08:07.282955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.660 [2024-07-15 17:08:07.292296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.660 [2024-07-15 17:08:07.292479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.660 [2024-07-15 17:08:07.292496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.660 [2024-07-15 17:08:07.301832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.660 [2024-07-15 17:08:07.301995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:00.660 [2024-07-15 17:08:07.302011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.660 [2024-07-15 17:08:07.311285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.660 [2024-07-15 17:08:07.311451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.660 [2024-07-15 17:08:07.311468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.660 [2024-07-15 17:08:07.320794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.660 [2024-07-15 17:08:07.320960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.660 [2024-07-15 17:08:07.320978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.919 [2024-07-15 17:08:07.330550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.919 [2024-07-15 17:08:07.330715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.919 [2024-07-15 17:08:07.330733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.919 [2024-07-15 17:08:07.340132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.919 [2024-07-15 17:08:07.340310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5022 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.919 [2024-07-15 17:08:07.340328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.919 [2024-07-15 17:08:07.349778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.919 [2024-07-15 17:08:07.349942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.919 [2024-07-15 17:08:07.349961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.919 [2024-07-15 17:08:07.359259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.919 [2024-07-15 17:08:07.359424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.919 [2024-07-15 17:08:07.359442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.919 [2024-07-15 17:08:07.368746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.919 [2024-07-15 17:08:07.368929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.919 [2024-07-15 17:08:07.368947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.919 [2024-07-15 17:08:07.378442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.919 [2024-07-15 17:08:07.378624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:23701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.919 [2024-07-15 17:08:07.378642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.919 [2024-07-15 17:08:07.388071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.919 [2024-07-15 17:08:07.388252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.919 [2024-07-15 17:08:07.388270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.919 [2024-07-15 17:08:07.397576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.919 [2024-07-15 17:08:07.397739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.919 [2024-07-15 17:08:07.397756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.919 [2024-07-15 17:08:07.407071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.919 [2024-07-15 17:08:07.407235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.919 [2024-07-15 17:08:07.407255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.919 [2024-07-15 17:08:07.416541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.919 [2024-07-15 17:08:07.416718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.416735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.426067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.426234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.426251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.435580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.435744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.435761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.445114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.445304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.445322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.454630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 
[2024-07-15 17:08:07.454793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.454810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.464050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.464213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.464236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.473596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.473758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.473775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.483002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.483165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.483182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.492500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) 
with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.492687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.492704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.502050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.502210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.502232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.511516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.511678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.511695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.521060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.521231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.521249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.530543] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.530703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.530720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.539991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.540150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.540168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.549575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.549739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.549756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.559126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.559295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.559312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.568616] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.568798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.568816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:00.920 [2024-07-15 17:08:07.578124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:00.920 [2024-07-15 17:08:07.578291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.920 [2024-07-15 17:08:07.578309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.180 [2024-07-15 17:08:07.587758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.180 [2024-07-15 17:08:07.587926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.180 [2024-07-15 17:08:07.587944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.180 [2024-07-15 17:08:07.597391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.180 [2024-07-15 17:08:07.597582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.180 [2024-07-15 17:08:07.597600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 
dnr:0 00:26:01.180 [2024-07-15 17:08:07.606920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.180 [2024-07-15 17:08:07.607084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.180 [2024-07-15 17:08:07.607101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.180 [2024-07-15 17:08:07.616402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.180 [2024-07-15 17:08:07.616583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.180 [2024-07-15 17:08:07.616600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.180 [2024-07-15 17:08:07.625955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.180 [2024-07-15 17:08:07.626130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.180 [2024-07-15 17:08:07.626148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.180 [2024-07-15 17:08:07.635726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.180 [2024-07-15 17:08:07.635893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.180 [2024-07-15 17:08:07.635911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.180 [2024-07-15 17:08:07.645473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.180 [2024-07-15 17:08:07.645641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.180 [2024-07-15 17:08:07.645659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.180 [2024-07-15 17:08:07.655220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.180 [2024-07-15 17:08:07.655392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.655412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.664958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.665124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.665142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.674652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.674834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.674852] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.684522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.684707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.684726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.694161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.694337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.694355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.703816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.703977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.703994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.713515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.713680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.713699] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.723256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.723421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.723438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.732992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.733157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.733174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.742737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.742905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.742922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.752477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.752644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:01.181 [2024-07-15 17:08:07.752660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.761973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.762135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.762152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.771459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.771640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.771658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.781042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.781207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.781229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.790555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.790718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11919 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.790735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.800059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.800219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.181 [2024-07-15 17:08:07.800241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.181 [2024-07-15 17:08:07.809604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.181 [2024-07-15 17:08:07.809787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.182 [2024-07-15 17:08:07.809804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.182 [2024-07-15 17:08:07.819142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.182 [2024-07-15 17:08:07.819333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.182 [2024-07-15 17:08:07.819350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.182 [2024-07-15 17:08:07.828711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.182 [2024-07-15 17:08:07.828874] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.182 [2024-07-15 17:08:07.828890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.182 [2024-07-15 17:08:07.838159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.182 [2024-07-15 17:08:07.838348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.182 [2024-07-15 17:08:07.838366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.182 [2024-07-15 17:08:07.847801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.182 [2024-07-15 17:08:07.847966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.182 [2024-07-15 17:08:07.847984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.441 [2024-07-15 17:08:07.857534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.441 [2024-07-15 17:08:07.857716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.441 [2024-07-15 17:08:07.857733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.441 [2024-07-15 17:08:07.867079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.441 [2024-07-15 17:08:07.867242] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.441 [2024-07-15 17:08:07.867259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.876618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.876799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.876816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.886367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.886559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.886577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.895993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.896174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.896191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.905739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 
00:26:01.442 [2024-07-15 17:08:07.905901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.905921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.915237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.915420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.915437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.924767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.924930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.924946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.934249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.934413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.934429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.943773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.943958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.943975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.953367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.953530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.953547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.962886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.963066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.963083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.972409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.972592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.972609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.981912] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.982076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.982094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:07.991421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:07.991586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:07.991603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:08.000940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.001125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.001143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:08.010479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.010642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.010658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:26:01.442 [2024-07-15 17:08:08.019938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.020102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.020118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:08.029437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.029602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.029619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:08.038883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.039046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.039063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:08.048385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.048575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.048592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:08.057885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.058048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.058064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:08.067392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.067575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.067592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:08.076938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.077104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.077120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:08.086386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.086555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.086572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:08.095853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.096035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.096051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.442 [2024-07-15 17:08:08.105412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.442 [2024-07-15 17:08:08.105579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.442 [2024-07-15 17:08:08.105596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.115248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.115430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.115447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.124786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.124946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.124963] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.134234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.134407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.134425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.143917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.144084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.144102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.153575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.153754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.153775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.163157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.163347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:01.702 [2024-07-15 17:08:08.163365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.172677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.172859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.172875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.182165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.182349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.182366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.191689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.191850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.191867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.201143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.201314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2232 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.201331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.210708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.210889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.210907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.220189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.220359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.220376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.229704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.229865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.229882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.239206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.239392] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.239413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.248707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.248866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.248883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.258229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.258411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.258429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.267747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.267907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.267924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.277272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.277433] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.277450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.286765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.286947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.286964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.296275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.296457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.296474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.305792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.305955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.305971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.315302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with 
pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.315491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.315508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 [2024-07-15 17:08:08.324834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f4d0) with pdu=0x2000190fef90 00:26:01.702 [2024-07-15 17:08:08.324996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.702 [2024-07-15 17:08:08.325013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.702 00:26:01.702 Latency(us) 00:26:01.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.702 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:01.702 nvme0n1 : 2.00 26697.24 104.29 0.00 0.00 4786.13 4188.61 9858.89 00:26:01.702 =================================================================================================================== 00:26:01.702 Total : 26697.24 104.29 0.00 0.00 4786.13 4188.61 9858.89 00:26:01.702 0 00:26:01.702 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:01.702 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:01.702 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:01.702 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:01.702 | .driver_specific 00:26:01.702 | 
.nvme_error 00:26:01.702 | .status_code 00:26:01.702 | .command_transient_transport_error' 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 )) 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 227082 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 227082 ']' 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 227082 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 227082 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 227082' 00:26:01.961 killing process with pid 227082 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 227082 00:26:01.961 Received shutdown signal, test time was about 2.000000 seconds 00:26:01.961 00:26:01.961 Latency(us) 00:26:01.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.961 =================================================================================================================== 00:26:01.961 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:01.961 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 227082 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=227754 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 227754 /var/tmp/bperf.sock 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 227754 ']' 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:02.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.220 17:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.220 [2024-07-15 17:08:08.802003] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:26:02.220 [2024-07-15 17:08:08.802051] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227754 ] 00:26:02.220 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:02.220 Zero copy mechanism will not be used. 00:26:02.220 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.220 [2024-07-15 17:08:08.856069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.479 [2024-07-15 17:08:08.936070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.046 17:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:03.046 17:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:03.046 17:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:03.046 17:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:03.305 17:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:03.305 17:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.305 17:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.305 17:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.305 17:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:26:03.305 17:08:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.565 nvme0n1 00:26:03.565 17:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:03.565 17:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.565 17:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.565 17:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.565 17:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:03.565 17:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:03.565 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:03.565 Zero copy mechanism will not be used. 00:26:03.565 Running I/O for 2 seconds... 
00:26:03.565 [2024-07-15 17:08:10.197071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.565 [2024-07-15 17:08:10.197170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.565 [2024-07-15 17:08:10.197198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.565 [2024-07-15 17:08:10.206572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.565 [2024-07-15 17:08:10.206970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.565 [2024-07-15 17:08:10.206991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.565 [2024-07-15 17:08:10.215795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.565 [2024-07-15 17:08:10.216032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.565 [2024-07-15 17:08:10.216053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.565 [2024-07-15 17:08:10.224847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.565 [2024-07-15 17:08:10.225246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.565 [2024-07-15 17:08:10.225282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.825 [2024-07-15 17:08:10.233849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.825 [2024-07-15 17:08:10.234250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.825 [2024-07-15 17:08:10.234269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.825 [2024-07-15 17:08:10.242117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.825 [2024-07-15 17:08:10.242518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.825 [2024-07-15 17:08:10.242537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.825 [2024-07-15 17:08:10.250122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.825 [2024-07-15 17:08:10.250197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.825 [2024-07-15 17:08:10.250216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.825 [2024-07-15 17:08:10.257535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.825 [2024-07-15 17:08:10.257932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.825 [2024-07-15 17:08:10.257951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.825 [2024-07-15 17:08:10.264869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.825 [2024-07-15 17:08:10.265317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.825 [2024-07-15 17:08:10.265341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.825 [2024-07-15 17:08:10.272202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.825 [2024-07-15 17:08:10.272583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.825 [2024-07-15 17:08:10.272602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.825 [2024-07-15 17:08:10.281176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.825 [2024-07-15 17:08:10.281578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.825 [2024-07-15 17:08:10.281597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.825 [2024-07-15 17:08:10.290171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.825 [2024-07-15 17:08:10.290567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:03.825 [2024-07-15 17:08:10.290587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.825 [2024-07-15 17:08:10.298973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.825 [2024-07-15 17:08:10.299363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.825 [2024-07-15 17:08:10.299382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.825 [2024-07-15 17:08:10.307825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.825 [2024-07-15 17:08:10.308234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.825 [2024-07-15 17:08:10.308254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.825 [2024-07-15 17:08:10.316575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.825 [2024-07-15 17:08:10.316970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.316989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.326027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.326438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.326457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.335176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.335296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.335313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.344438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.344825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.344844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.353116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.353241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.353259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.361794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.362290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.362309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.370023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.370557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.370576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.377735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.378119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.378138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.385737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.386178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.386196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.394422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 
00:26:03.826 [2024-07-15 17:08:10.394854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.394873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.402785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.403232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.403250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.411584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.411998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.412017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.419368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.419791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.419810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.428216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.428683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.428701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.434865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.435199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.435218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.441393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.441803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.441821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.447764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.448130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.448148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 
17:08:10.453775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.454119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.454138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.461573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.462007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.462026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.469805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.470246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.470266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.477691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.478149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.478174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.826 [2024-07-15 17:08:10.485835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:03.826 [2024-07-15 17:08:10.486211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.826 [2024-07-15 17:08:10.486237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.493331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.493723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.493742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.501525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.501995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.502015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.510253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.510698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.510717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.519127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.519678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.519698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.526968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.527350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.527369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.533731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.534116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.534134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.540384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.540763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.540781] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.547500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.547862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.547881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.554435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.554851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.554871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.563093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.563509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.563530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.569645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.570020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.570039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.576290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.576669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.576689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.582729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.583093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.583112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.588001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.588349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.588368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.592857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.593217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.593242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.597746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.598104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.598127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.602458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.602810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.602829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.607078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.607397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.087 [2024-07-15 17:08:10.607416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.087 [2024-07-15 17:08:10.611702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.087 [2024-07-15 17:08:10.612029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.612047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.616674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.616992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.617010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.621948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.622276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.622295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.627284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.627620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.627639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.632427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 
00:26:04.088 [2024-07-15 17:08:10.632749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.632767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.637534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.637848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.637866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.642628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.642963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.642982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.648559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.648884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.648902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.653862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.654204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.654223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.659454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.659779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.659797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.665159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.665501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.665519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.670737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.671046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.671065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 
17:08:10.675951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.676283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.676302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.681236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.681557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.681577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.686508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.686820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.686839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.691529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.691842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.691860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.696584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.696902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.696920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.701361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.701702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.701721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.706452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.706779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.706798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.711855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.712175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.712194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.717704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.718016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.718036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.723304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.723629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.723647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.729671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.729998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.730017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.735470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.735786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.735809] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.741887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.742220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.742246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.088 [2024-07-15 17:08:10.748711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.088 [2024-07-15 17:08:10.749173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.088 [2024-07-15 17:08:10.749192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.756949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.757405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.757423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.765668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.766046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.766065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.773687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.774050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.774069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.781273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.781577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.781596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.788849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.789259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.789277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.796440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.796747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.796766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.805192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.805564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.805583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.813103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.813485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.813504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.820847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.821240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.821259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.828581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.828997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.829016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.836053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.836402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.836420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.843126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.843518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.843537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.850032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.850422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.850441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.857096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 
00:26:04.349 [2024-07-15 17:08:10.857514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.857533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.864145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.864537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.864555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.870440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.870743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.870761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.877052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.877416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.877434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.884145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.884573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.884592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.891286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.891717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.891736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.898130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.898501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.898521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.905120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.905543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.905563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 
17:08:10.911891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.912295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.912314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.919331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.349 [2024-07-15 17:08:10.919724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.349 [2024-07-15 17:08:10.919742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.349 [2024-07-15 17:08:10.926284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.350 [2024-07-15 17:08:10.926695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.350 [2024-07-15 17:08:10.926717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.350 [2024-07-15 17:08:10.933364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.350 [2024-07-15 17:08:10.933785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.350 [2024-07-15 17:08:10.933803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.350 [2024-07-15 17:08:10.940320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.350 [2024-07-15 17:08:10.940708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.350 [2024-07-15 17:08:10.940726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.350 [2024-07-15 17:08:10.947252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.350 [2024-07-15 17:08:10.947630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.350 [2024-07-15 17:08:10.947648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.350 [2024-07-15 17:08:10.954338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.350 [2024-07-15 17:08:10.954767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.350 [2024-07-15 17:08:10.954785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.350 [2024-07-15 17:08:10.961497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.350 [2024-07-15 17:08:10.961926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.350 [2024-07-15 17:08:10.961946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.350 [2024-07-15 17:08:10.968745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.350 [2024-07-15 17:08:10.969111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.350 [2024-07-15 17:08:10.969129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.350 [2024-07-15 17:08:10.975645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.350 [2024-07-15 17:08:10.976068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.350 [2024-07-15 17:08:10.976091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.350 [2024-07-15 17:08:10.983770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.350 [2024-07-15 17:08:10.984171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.350 [2024-07-15 17:08:10.984190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.350 [2024-07-15 17:08:10.990686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.350 [2024-07-15 17:08:10.991042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.350 [2024-07-15 17:08:10.991061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.350 [2024-07-15 17:08:10.996287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.350 [2024-07-15 17:08:10.996593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.350 [2024-07-15 17:08:10.996611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.350 [2024-07-15 17:08:11.001329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.350 [2024-07-15 17:08:11.001628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.350 [2024-07-15 17:08:11.001646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.350 [2024-07-15 17:08:11.006453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.350 [2024-07-15 17:08:11.006759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.350 [2024-07-15 17:08:11.006778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.350 [2024-07-15 17:08:11.012346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.350 [2024-07-15 17:08:11.012657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.350 [2024-07-15 17:08:11.012676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.018391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.018728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.018746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.024495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.024816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.024834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.030289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.030597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.030615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.035981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.036290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.036315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.042011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.042312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.042330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.047776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.048085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.048103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.053093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.053399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.053418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.058570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.058862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.058880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.063719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.064028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.064046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.068706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.069006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.069024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.073706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.073992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.074010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.078510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.078818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.078837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.083324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.083644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.083662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.088104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.088401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.088419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.092732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.093018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.093036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.097182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.097536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.097566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.101966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.102270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.102288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.107334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.107631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.107650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.112851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.113147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.113165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.611 [2024-07-15 17:08:11.118099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.611 [2024-07-15 17:08:11.118423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.611 [2024-07-15 17:08:11.118441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.123489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.123802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.123820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.128954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.129254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.129273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.134705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.135010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.135028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.140493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.140803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.140822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.146419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.146768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.146787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.152671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.152974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.152993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.157986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.158292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.158310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.162735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.163032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.163050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.167342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.167650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.167668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.171811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.172112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.172135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.176214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.176515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.176534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.180525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.180822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.180840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.184825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.185123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.185142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.189048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.189372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.189391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.193341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.193647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.193666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.197529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.197833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.197851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.201819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.202109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.202127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.206028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.206347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.206366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.210370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.210678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.210697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.215152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.215461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.215480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.219527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.219821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.219840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.223880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.224184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.224202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.228311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.228606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.228625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.233166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.233457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.233475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.238739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.239063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.239080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.244033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.244333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.244351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.249203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.249517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.249535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.254796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.255106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.612 [2024-07-15 17:08:11.255124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.612 [2024-07-15 17:08:11.260059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.612 [2024-07-15 17:08:11.260363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.613 [2024-07-15 17:08:11.260381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.613 [2024-07-15 17:08:11.266019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.613 [2024-07-15 17:08:11.266313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.613 [2024-07-15 17:08:11.266332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.613 [2024-07-15 17:08:11.271592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.613 [2024-07-15 17:08:11.271892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.613 [2024-07-15 17:08:11.271910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.613 [2024-07-15 17:08:11.277211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.613 [2024-07-15 17:08:11.277557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.613 [2024-07-15 17:08:11.277575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.873 [2024-07-15 17:08:11.282850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.873 [2024-07-15 17:08:11.283149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.873 [2024-07-15 17:08:11.283168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.873 [2024-07-15 17:08:11.288440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.873 [2024-07-15 17:08:11.288747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.873 [2024-07-15 17:08:11.288766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.873 [2024-07-15 17:08:11.294427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.294732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.294751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.874 [2024-07-15 17:08:11.300196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.300525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.300546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.874 [2024-07-15 17:08:11.305929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.306251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.306271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.874 [2024-07-15 17:08:11.311752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.312063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.312082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.874 [2024-07-15 17:08:11.316986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.317305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.317324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.874 [2024-07-15 17:08:11.321711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.322017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.322035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.874 [2024-07-15 17:08:11.326570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.326873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.326891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.874 [2024-07-15 17:08:11.331398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.331698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.331717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.874 [2024-07-15 17:08:11.336819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.337123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.337141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.874 [2024-07-15 17:08:11.341608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.341916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.341935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.874 [2024-07-15 17:08:11.346786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.347091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.347110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.874 [2024-07-15 17:08:11.351582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:04.874 [2024-07-15 17:08:11.351878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.874 [2024-07-15 17:08:11.351895]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.356791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.357074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.357092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.361749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.362051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.362070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.366491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.366782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.366800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.371413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.371716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.371735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.376349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.376651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.376670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.381323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.381622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.381640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.386088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.386397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.386416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.390964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.391267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.391284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.395907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.396202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.396220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.400811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.401105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.401123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.405704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.406011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.406030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.410624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.410923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.410941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.415485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.415786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.415804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.420168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.420466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.420486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.425699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.425992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.426010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.430775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 
00:26:04.874 [2024-07-15 17:08:11.431074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.431095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.435898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.874 [2024-07-15 17:08:11.436199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.874 [2024-07-15 17:08:11.436218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.874 [2024-07-15 17:08:11.440709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.440990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.441008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.444751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.445027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.445045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.448725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.449001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.449019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.453281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.453559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.453578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.457310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.457603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.457621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.461709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.461988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.462007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 
17:08:11.465757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.466175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.466194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.470018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.470281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.470300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.473810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.474066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.474085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.477613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.477869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.477888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.481437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.481700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.481719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.485245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.485514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.485532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.488965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.489214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.489238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.492711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.492952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.492971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.496415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.496796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.496815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.500233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.500487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.500509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.504288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.504541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.504560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.507989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.508245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.508279] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.511727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.511977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.511995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.515452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.515698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.515718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.519170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.519456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.519475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.523650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.523928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.523946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.528900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.529246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.529264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.534501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.534838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.534856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.875 [2024-07-15 17:08:11.539852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:04.875 [2024-07-15 17:08:11.540184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.875 [2024-07-15 17:08:11.540203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.545277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.545592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.545611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.550696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.551025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.551044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.557388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.557732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.557750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.563834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.564203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.564221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.570254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.570604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.570622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.580212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.580602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.580621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.588488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.588874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.588894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.596076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.596456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.596474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.603675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 
00:26:05.136 [2024-07-15 17:08:11.604058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.604075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.610873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.611031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.611049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.620721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.621130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.621148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.627820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.628088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.628106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.634295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.634620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.634638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.640178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.640467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.640485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.647298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.647680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.647698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 17:08:11.653596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.136 [2024-07-15 17:08:11.653914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.136 [2024-07-15 17:08:11.653933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.136 [2024-07-15 
17:08:11.658336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.136 [2024-07-15 17:08:11.658601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.136 [2024-07-15 17:08:11.658623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.136 [2024-07-15 17:08:11.662854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.136 [2024-07-15 17:08:11.663114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.136 [2024-07-15 17:08:11.663132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.136 [2024-07-15 17:08:11.666855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.136 [2024-07-15 17:08:11.667108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.136 [2024-07-15 17:08:11.667126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.136 [2024-07-15 17:08:11.670841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.136 [2024-07-15 17:08:11.671121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.136 [2024-07-15 17:08:11.671139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.136 [2024-07-15 17:08:11.677247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.136 [2024-07-15 17:08:11.677612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.677631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.683475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.683725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.683743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.688200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.688455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.688473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.694787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.695077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.695095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.701979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.702300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.702318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.709965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.710305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.710324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.717447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.717823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.717842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.725525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.725843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.725863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.732601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.732951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.732969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.739804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.740158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.740176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.746581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.746897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.746915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.753369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.753649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.753667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.759719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.760072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.760090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.767434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.767775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.767794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.775611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.775961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.775979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.784638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.784889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.784908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.793532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.793844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.793862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.137 [2024-07-15 17:08:11.800827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.137 [2024-07-15 17:08:11.801188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.137 [2024-07-15 17:08:11.801207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.808701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.809062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.809081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.816713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.817010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.817029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.824399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.824639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.824657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.832026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.832263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.832281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.840101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.840401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.840423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.847677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.847957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.847975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.855098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.855470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.855488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.862061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.862350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.862369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.866544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.866776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.866795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.870426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.870659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.870677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.874306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.874565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.874584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.878199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.878449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.878467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.882060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.882297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.882316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.885940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.886176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.886194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.889741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.889977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.889996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.893591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.893817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.426 [2024-07-15 17:08:11.893835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.426 [2024-07-15 17:08:11.897422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.426 [2024-07-15 17:08:11.897645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.897663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.901216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.901452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.901471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.905031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.905263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.905282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.908890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.909125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.909144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.912756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.912986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.913005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.916614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.916844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.916863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.920483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.920719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.920737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.924266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.924502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.924521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.928154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.928391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.928410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.931954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.932183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.932202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.935762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.935990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.936010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.939591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.939816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.939835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.943415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.943646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.943664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.947272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.947487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.947505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.951021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.951256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.951274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.954776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.954991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.955009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.958528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.958757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.958775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.962328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.962565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.962582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.966189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.966421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.966439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.970037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.970269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.970289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.973847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.974085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.974103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.977662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.977895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.977914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.981486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.981712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.981730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.985322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.985562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.985580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.989193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.989428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.989447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.992985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.993219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.993245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:11.996824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:11.997052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:11.997070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:12.000650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:12.000881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:12.000900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:12.004461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.427 [2024-07-15 17:08:12.004688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.427 [2024-07-15 17:08:12.004707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.427 [2024-07-15 17:08:12.008427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.428 [2024-07-15 17:08:12.008663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.428 [2024-07-15 17:08:12.008681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.428 [2024-07-15 17:08:12.012687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.428 [2024-07-15 17:08:12.012915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.428 [2024-07-15 17:08:12.012934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.428 [2024-07-15 17:08:12.016819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.428 [2024-07-15 17:08:12.017053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.428 [2024-07-15 17:08:12.017078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.428 [2024-07-15 17:08:12.020758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.428 [2024-07-15 17:08:12.020989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.428 [2024-07-15 17:08:12.021008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.428 [2024-07-15 17:08:12.025091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.428 [2024-07-15 17:08:12.025336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.428 [2024-07-15 17:08:12.025354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.428 [2024-07-15 17:08:12.030193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.428 [2024-07-15 17:08:12.030435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.428 [2024-07-15 17:08:12.030453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.428 [2024-07-15 17:08:12.034688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.428 [2024-07-15 17:08:12.034919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.428 [2024-07-15 17:08:12.034938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.428 [2024-07-15 17:08:12.039052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.428 [2024-07-15 17:08:12.039300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.428 [2024-07-15 17:08:12.039318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.428 [2024-07-15 17:08:12.043118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.428 [2024-07-15 17:08:12.043353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.428 [2024-07-15 17:08:12.043372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.428 [2024-07-15 17:08:12.047660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.428 [2024-07-15 17:08:12.047898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.428 [2024-07-15 17:08:12.047915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.428 [2024-07-15 17:08:12.052395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90
00:26:05.428 [2024-07-15 17:08:12.052629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.428 [2024-07-15 17:08:12.052648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.428 [2024-07-15 17:08:12.057427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.428 [2024-07-15 17:08:12.057675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.428 [2024-07-15 17:08:12.057694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.428 [2024-07-15 17:08:12.062317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.428 [2024-07-15 17:08:12.062557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.428 [2024-07-15 17:08:12.062576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.428 [2024-07-15 17:08:12.066744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.428 [2024-07-15 17:08:12.066977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.428 [2024-07-15 17:08:12.066995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.428 [2024-07-15 17:08:12.071194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.428 [2024-07-15 17:08:12.071442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.428 [2024-07-15 17:08:12.071460] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.428 [2024-07-15 17:08:12.076011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.428 [2024-07-15 17:08:12.076255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.428 [2024-07-15 17:08:12.076273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.428 [2024-07-15 17:08:12.080751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.428 [2024-07-15 17:08:12.080984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.428 [2024-07-15 17:08:12.081002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.428 [2024-07-15 17:08:12.085264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.428 [2024-07-15 17:08:12.085545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.428 [2024-07-15 17:08:12.085563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.428 [2024-07-15 17:08:12.089888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.428 [2024-07-15 17:08:12.090116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:05.428 [2024-07-15 17:08:12.090135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.686 [2024-07-15 17:08:12.093782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.686 [2024-07-15 17:08:12.094020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.686 [2024-07-15 17:08:12.094039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.686 [2024-07-15 17:08:12.097676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.686 [2024-07-15 17:08:12.097912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.686 [2024-07-15 17:08:12.097931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.686 [2024-07-15 17:08:12.101684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.686 [2024-07-15 17:08:12.101912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.686 [2024-07-15 17:08:12.101930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.686 [2024-07-15 17:08:12.106052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.686 [2024-07-15 17:08:12.106340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.686 [2024-07-15 17:08:12.106360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.686 [2024-07-15 17:08:12.111597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.686 [2024-07-15 17:08:12.111859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.686 [2024-07-15 17:08:12.111878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.686 [2024-07-15 17:08:12.115924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.686 [2024-07-15 17:08:12.116158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.686 [2024-07-15 17:08:12.116176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.686 [2024-07-15 17:08:12.120522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.686 [2024-07-15 17:08:12.120943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.686 [2024-07-15 17:08:12.120962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.686 [2024-07-15 17:08:12.125068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.686 [2024-07-15 17:08:12.125317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.686 [2024-07-15 17:08:12.125336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.686 [2024-07-15 17:08:12.129267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.686 [2024-07-15 17:08:12.129516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.686 [2024-07-15 17:08:12.129534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.686 [2024-07-15 17:08:12.133485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.687 [2024-07-15 17:08:12.133724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.133746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.687 [2024-07-15 17:08:12.137990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.687 [2024-07-15 17:08:12.138231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.138251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.687 [2024-07-15 17:08:12.142309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 
00:26:05.687 [2024-07-15 17:08:12.142542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.142561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.687 [2024-07-15 17:08:12.146668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.687 [2024-07-15 17:08:12.146899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.146917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.687 [2024-07-15 17:08:12.150877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.687 [2024-07-15 17:08:12.151109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.151127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.687 [2024-07-15 17:08:12.154871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.687 [2024-07-15 17:08:12.155097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.155115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.687 [2024-07-15 17:08:12.159438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.687 [2024-07-15 17:08:12.159661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.159680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.687 [2024-07-15 17:08:12.164685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.687 [2024-07-15 17:08:12.164917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.164935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.687 [2024-07-15 17:08:12.169296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.687 [2024-07-15 17:08:12.169532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.169551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.687 [2024-07-15 17:08:12.173793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.687 [2024-07-15 17:08:12.174034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.174053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.687 [2024-07-15 
17:08:12.178186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.687 [2024-07-15 17:08:12.178418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.178437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.687 [2024-07-15 17:08:12.182451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154f810) with pdu=0x2000190fef90 00:26:05.687 [2024-07-15 17:08:12.182682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.687 [2024-07-15 17:08:12.182701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.687 00:26:05.687 Latency(us) 00:26:05.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.687 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:05.687 nvme0n1 : 2.00 5402.15 675.27 0.00 0.00 2957.61 1780.87 14246.96 00:26:05.687 =================================================================================================================== 00:26:05.687 Total : 5402.15 675.27 0.00 0.00 2957.61 1780.87 14246.96 00:26:05.687 0 00:26:05.687 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:05.687 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:05.687 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:05.687 | .driver_specific 00:26:05.687 | .nvme_error 00:26:05.687 | .status_code 00:26:05.687 | .command_transient_transport_error' 
00:26:05.687 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 348 > 0 )) 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 227754 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 227754 ']' 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 227754 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 227754 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 227754' 00:26:05.945 killing process with pid 227754 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 227754 00:26:05.945 Received shutdown signal, test time was about 2.000000 seconds 00:26:05.945 00:26:05.945 Latency(us) 00:26:05.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.945 =================================================================================================================== 00:26:05.945 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:05.945 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # wait 227754 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 225633 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 225633 ']' 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 225633 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 225633 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 225633' 00:26:06.203 killing process with pid 225633 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 225633 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 225633 00:26:06.203 00:26:06.203 real 0m16.819s 00:26:06.203 user 0m32.261s 00:26:06.203 sys 0m4.463s 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:06.203 17:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.203 ************************************ 00:26:06.203 END TEST nvmf_digest_error 00:26:06.203 ************************************ 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- 
host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:06.461 rmmod nvme_tcp 00:26:06.461 rmmod nvme_fabrics 00:26:06.461 rmmod nvme_keyring 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 225633 ']' 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 225633 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 225633 ']' 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 225633 00:26:06.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (225633) - No such process 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 225633 is not found' 00:26:06.461 Process with pid 225633 is not found 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:06.461 17:08:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.462 17:08:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.462 17:08:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.361 17:08:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:08.361 00:26:08.361 real 0m41.614s 00:26:08.361 user 1m6.663s 00:26:08.361 sys 0m12.993s 00:26:08.361 17:08:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:08.361 17:08:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:08.361 ************************************ 00:26:08.361 END TEST nvmf_digest 00:26:08.361 ************************************ 00:26:08.619 17:08:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:08.619 17:08:15 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:26:08.619 17:08:15 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:26:08.619 17:08:15 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:26:08.619 17:08:15 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:08.619 17:08:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:08.619 17:08:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:08.619 17:08:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:08.619 ************************************ 00:26:08.619 START TEST nvmf_bdevperf 00:26:08.619 ************************************ 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 
00:26:08.619 * Looking for test storage... 00:26:08.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:08.619 17:08:15 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.619 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:08.620 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:08.620 17:08:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:08.620 17:08:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:13.897 17:08:19 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.897 
17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:13.897 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:13.897 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:13.897 Found net devices under 0000:86:00.0: cvl_0_0 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:13.897 Found net devices under 0000:86:00.1: cvl_0_1 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:13.897 17:08:19 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.897 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:13.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:26:13.897 00:26:13.898 --- 10.0.0.2 ping statistics --- 00:26:13.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.898 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:13.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:26:13.898 00:26:13.898 --- 10.0.0.1 ping statistics --- 00:26:13.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.898 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=231755 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 231755 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 231755 ']' 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
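For readers following the trace, the `nvmf_tcp_init` steps above (netns creation, address assignment, iptables rule, ping check) can be condensed into the sketch below. It is a dry run: `run` only echoes each command, since executing the real sequence needs root and the `cvl_0_*` net devices present on this rig.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns plumbing in nvmf_tcp_init above.
# Swap the echo in run() for "$@" to apply it for real (root required).
NS=cvl_0_0_ns_spdk

run() { echo "$@"; }

setup_ns() {
	run ip netns add "$NS"                                    # target side gets its own netns
	run ip link set cvl_0_0 netns "$NS"                       # move the target NIC into it
	run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address (host side)
	run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (netns side)
	run ip link set cvl_0_1 up
	run ip netns exec "$NS" ip link set cvl_0_0 up
	run ip netns exec "$NS" ip link set lo up
	run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
	run ping -c 1 10.0.0.2                                    # initiator -> target
	run ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator
}

setup_ns
```

The two pings mirror the connectivity check in the trace before the target is started.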
00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.898 17:08:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:13.898 [2024-07-15 17:08:20.032473] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:26:13.898 [2024-07-15 17:08:20.032523] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.898 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.898 [2024-07-15 17:08:20.088864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:13.898 [2024-07-15 17:08:20.168738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.898 [2024-07-15 17:08:20.168772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.898 [2024-07-15 17:08:20.168779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.898 [2024-07-15 17:08:20.168785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.898 [2024-07-15 17:08:20.168790] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
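The `waitforlisten` step logged above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") polls for the RPC endpoint before any `rpc_cmd` is issued. A minimal sketch of that polling loop follows; the path check is simplified to `-e`, whereas the real helper in autotest_common.sh also verifies the pid is still alive, and `max_retries=100` matches the value in the trace.

```shell
#!/usr/bin/env bash
# Simplified waitforlisten: poll until a path appears or retries run out.
wait_for_path() {
	local path=$1 retries=${2:-100}
	while [ ! -e "$path" ]; do
		retries=$((retries - 1))
		[ "$retries" -le 0 ] && return 1
		sleep 0.1
	done
}

# demo: a background job "starts listening" after a short delay
tmp=$(mktemp -u)
( sleep 0.3; touch "$tmp" ) &
wait_for_path "$tmp" 100 && echo "up: $tmp"
wait
```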
00:26:13.898 [2024-07-15 17:08:20.168884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.898 [2024-07-15 17:08:20.168899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.898 [2024-07-15 17:08:20.168900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.470 [2024-07-15 17:08:20.869236] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.470 Malloc0 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.470 [2024-07-15 17:08:20.921946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:14.470 17:08:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:14.471 17:08:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:14.471 17:08:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:14.471 17:08:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.471 17:08:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:26:14.471 { 00:26:14.471 "params": { 00:26:14.471 "name": "Nvme$subsystem", 00:26:14.471 "trtype": "$TEST_TRANSPORT", 00:26:14.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.471 "adrfam": "ipv4", 00:26:14.471 "trsvcid": "$NVMF_PORT", 00:26:14.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.471 "hdgst": ${hdgst:-false}, 00:26:14.471 "ddgst": ${ddgst:-false} 00:26:14.471 }, 00:26:14.471 "method": "bdev_nvme_attach_controller" 00:26:14.471 } 00:26:14.471 EOF 00:26:14.471 )") 00:26:14.471 17:08:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:14.471 17:08:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:14.471 17:08:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:14.471 17:08:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:14.471 "params": { 00:26:14.471 "name": "Nvme1", 00:26:14.471 "trtype": "tcp", 00:26:14.471 "traddr": "10.0.0.2", 00:26:14.471 "adrfam": "ipv4", 00:26:14.471 "trsvcid": "4420", 00:26:14.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:14.471 "hdgst": false, 00:26:14.471 "ddgst": false 00:26:14.471 }, 00:26:14.471 "method": "bdev_nvme_attach_controller" 00:26:14.471 }' 00:26:14.471 [2024-07-15 17:08:20.970545] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
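The `--json /dev/fd/62` config that bdevperf consumes is produced by the heredoc-and-join pattern traced above (`gen_nvmf_target_json` in nvmf/common.sh). A self-contained sketch of that pattern, with the `NVMF_*` variables replaced by the literal values from this run:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern: one JSON fragment per
# subsystem, comma-joined via IFS. The tcp/10.0.0.2/4420 values are
# hard-coded stand-ins for the NVMF_* variables used in nvmf/common.sh.
gen_json() {
	local subsystem config=()
	for subsystem in "${@:-1}"; do
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done
	local IFS=,
	printf '%s\n' "${config[*]}"
}

gen_json 1
```

Passing more arguments (e.g. `gen_json 1 2`) emits one comma-joined `bdev_nvme_attach_controller` entry per subsystem, which is how the helper scales to multi-controller runs.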
00:26:14.471 [2024-07-15 17:08:20.970593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232004 ] 00:26:14.471 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.471 [2024-07-15 17:08:21.025541] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.471 [2024-07-15 17:08:21.098367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.731 Running I/O for 1 seconds... 00:26:15.668 00:26:15.668 Latency(us) 00:26:15.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.668 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:15.668 Verification LBA range: start 0x0 length 0x4000 00:26:15.668 Nvme1n1 : 1.00 10862.54 42.43 0.00 0.00 11741.68 1524.42 16868.40 00:26:15.668 =================================================================================================================== 00:26:15.668 Total : 10862.54 42.43 0.00 0.00 11741.68 1524.42 16868.40 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=232231 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:26:15.928 { 00:26:15.928 "params": { 00:26:15.928 "name": "Nvme$subsystem", 00:26:15.928 "trtype": "$TEST_TRANSPORT", 00:26:15.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.928 "adrfam": "ipv4", 00:26:15.928 "trsvcid": "$NVMF_PORT", 00:26:15.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.928 "hdgst": ${hdgst:-false}, 00:26:15.928 "ddgst": ${ddgst:-false} 00:26:15.928 }, 00:26:15.928 "method": "bdev_nvme_attach_controller" 00:26:15.928 } 00:26:15.928 EOF 00:26:15.928 )") 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:15.928 17:08:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:15.928 "params": { 00:26:15.928 "name": "Nvme1", 00:26:15.928 "trtype": "tcp", 00:26:15.928 "traddr": "10.0.0.2", 00:26:15.928 "adrfam": "ipv4", 00:26:15.928 "trsvcid": "4420", 00:26:15.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:15.928 "hdgst": false, 00:26:15.928 "ddgst": false 00:26:15.928 }, 00:26:15.928 "method": "bdev_nvme_attach_controller" 00:26:15.928 }' 00:26:15.928 [2024-07-15 17:08:22.488714] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:26:15.928 [2024-07-15 17:08:22.488763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232231 ] 00:26:15.928 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.928 [2024-07-15 17:08:22.543850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.187 [2024-07-15 17:08:22.614578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.445 Running I/O for 15 seconds... 00:26:19.050 17:08:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 231755 00:26:19.050 17:08:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:19.050 [2024-07-15 17:08:25.460490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.050 [2024-07-15 17:08:25.460532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.050 [2024-07-15 17:08:25.460550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.050 [2024-07-15 17:08:25.460558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.050 [2024-07-15 17:08:25.460568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.050 [2024-07-15 17:08:25.460576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.050 [2024-07-15 17:08:25.460585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:19.050 [2024-07-15 17:08:25.460592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.050 [2024-07-15 17:08:25.460601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.050 [2024-07-15 17:08:25.460608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.050 [2024-07-15 17:08:25.460617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.050 [2024-07-15 17:08:25.460624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.050 [2024-07-15 17:08:25.460633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.050 [2024-07-15 17:08:25.460640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.050 [2024-07-15 17:08:25.460648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.050 [2024-07-15 17:08:25.460654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.050 [2024-07-15 17:08:25.460662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.050 [2024-07-15 17:08:25.460670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.050 [2024-07-15 17:08:25.460678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:19.050 [2024-07-15 17:08:25.460685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:19.050 [2024-07-15 17:08:25.460694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:19.050 [2024-07-15 17:08:25.460701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 100 further identical WRITE command/completion pairs (sqid:1, nsid:1, len:8, lba 88080 through 88872 in steps of 8, varying cid), each aborted with SQ DELETION (00/08), 17:08:25.460710-.462223 ...]
00:26:19.053 [2024-07-15 17:08:25.462351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.053 [2024-07-15 17:08:25.462357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 14 further identical READ command/completion pairs (sqid:1, nsid:1, len:8, lba 87872 through 87976 in steps of 8, varying cid) and one WRITE (cid:11, lba:88880), each aborted with SQ DELETION (00/08), 17:08:25.462366-.462578 ...]
00:26:19.053 [2024-07-15 17:08:25.462588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93ac70 is same with the state(5) to be set
00:26:19.053 [2024-07-15 17:08:25.462597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:19.053 [2024-07-15 17:08:25.462602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:19.053 [2024-07-15 17:08:25.462608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87984 len:8 PRP1 0x0 PRP2 0x0
00:26:19.053 [2024-07-15 17:08:25.462615] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.053 [2024-07-15 17:08:25.462657] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x93ac70 was disconnected and freed. reset controller. 00:26:19.053 [2024-07-15 17:08:25.462703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.053 [2024-07-15 17:08:25.462712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.053 [2024-07-15 17:08:25.462719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.053 [2024-07-15 17:08:25.462726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.053 [2024-07-15 17:08:25.462733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.053 [2024-07-15 17:08:25.462739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.053 [2024-07-15 17:08:25.462746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.053 [2024-07-15 17:08:25.462752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.053 [2024-07-15 17:08:25.462758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.053 [2024-07-15 17:08:25.465572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.053 
[2024-07-15 17:08:25.465595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.053 [2024-07-15 17:08:25.466252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-07-15 17:08:25.466268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.053 [2024-07-15 17:08:25.466275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.053 [2024-07-15 17:08:25.466454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.053 [2024-07-15 17:08:25.466631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.053 [2024-07-15 17:08:25.466639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.053 [2024-07-15 17:08:25.466646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.053 [2024-07-15 17:08:25.469501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.053 [2024-07-15 17:08:25.478862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.053 [2024-07-15 17:08:25.479345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.053 [2024-07-15 17:08:25.479391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.053 [2024-07-15 17:08:25.479421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.053 [2024-07-15 17:08:25.479934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.053 [2024-07-15 17:08:25.480098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.053 [2024-07-15 17:08:25.480105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.053 [2024-07-15 17:08:25.480112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.053 [2024-07-15 17:08:25.482813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.053 [2024-07-15 17:08:25.491761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.053 [2024-07-15 17:08:25.492240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.053 [2024-07-15 17:08:25.492285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.053 [2024-07-15 17:08:25.492307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.053 [2024-07-15 17:08:25.492887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.053 [2024-07-15 17:08:25.493103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.053 [2024-07-15 17:08:25.493111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.053 [2024-07-15 17:08:25.493117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.053 [2024-07-15 17:08:25.495815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.053 [2024-07-15 17:08:25.504674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.053 [2024-07-15 17:08:25.505051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.053 [2024-07-15 17:08:25.505068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.053 [2024-07-15 17:08:25.505075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.053 [2024-07-15 17:08:25.505253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.054 [2024-07-15 17:08:25.505425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.054 [2024-07-15 17:08:25.505433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.054 [2024-07-15 17:08:25.505439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.054 [2024-07-15 17:08:25.508123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.054 [2024-07-15 17:08:25.517582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.054 [2024-07-15 17:08:25.517951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-07-15 17:08:25.517992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.054 [2024-07-15 17:08:25.518013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.054 [2024-07-15 17:08:25.518608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.054 [2024-07-15 17:08:25.519162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.054 [2024-07-15 17:08:25.519174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.054 [2024-07-15 17:08:25.519180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.054 [2024-07-15 17:08:25.523015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.054 [2024-07-15 17:08:25.531296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.054 [2024-07-15 17:08:25.531683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-07-15 17:08:25.531725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.054 [2024-07-15 17:08:25.531747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.054 [2024-07-15 17:08:25.532266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.054 [2024-07-15 17:08:25.532439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.054 [2024-07-15 17:08:25.532447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.054 [2024-07-15 17:08:25.532453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.054 [2024-07-15 17:08:25.535174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.054 [2024-07-15 17:08:25.544099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.054 [2024-07-15 17:08:25.544562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-07-15 17:08:25.544604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.054 [2024-07-15 17:08:25.544626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.054 [2024-07-15 17:08:25.545073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.054 [2024-07-15 17:08:25.545251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.054 [2024-07-15 17:08:25.545267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.054 [2024-07-15 17:08:25.545273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.054 [2024-07-15 17:08:25.547946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.054 [2024-07-15 17:08:25.557023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.054 [2024-07-15 17:08:25.557357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-07-15 17:08:25.557372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.054 [2024-07-15 17:08:25.557378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.054 [2024-07-15 17:08:25.557541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.054 [2024-07-15 17:08:25.557703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.054 [2024-07-15 17:08:25.557710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.054 [2024-07-15 17:08:25.557716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.054 [2024-07-15 17:08:25.560396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.054 [2024-07-15 17:08:25.570074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.054 [2024-07-15 17:08:25.570523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-07-15 17:08:25.570538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.054 [2024-07-15 17:08:25.570545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.054 [2024-07-15 17:08:25.570722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.054 [2024-07-15 17:08:25.570899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.054 [2024-07-15 17:08:25.570907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.054 [2024-07-15 17:08:25.570913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.054 [2024-07-15 17:08:25.573755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.054 [2024-07-15 17:08:25.583163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.054 [2024-07-15 17:08:25.583643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-07-15 17:08:25.583659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.054 [2024-07-15 17:08:25.583666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.054 [2024-07-15 17:08:25.583843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.054 [2024-07-15 17:08:25.584020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.054 [2024-07-15 17:08:25.584027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.054 [2024-07-15 17:08:25.584034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.054 [2024-07-15 17:08:25.586867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.054 [2024-07-15 17:08:25.596252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.054 [2024-07-15 17:08:25.596656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-07-15 17:08:25.596698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.054 [2024-07-15 17:08:25.596719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.054 [2024-07-15 17:08:25.597199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.054 [2024-07-15 17:08:25.597384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.054 [2024-07-15 17:08:25.597392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.054 [2024-07-15 17:08:25.597398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.054 [2024-07-15 17:08:25.600234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.054 [2024-07-15 17:08:25.609452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.054 [2024-07-15 17:08:25.609915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-07-15 17:08:25.609931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.054 [2024-07-15 17:08:25.609938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.054 [2024-07-15 17:08:25.610118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.055 [2024-07-15 17:08:25.610302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.055 [2024-07-15 17:08:25.610311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.055 [2024-07-15 17:08:25.610317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.055 [2024-07-15 17:08:25.613150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.055 [2024-07-15 17:08:25.622529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.055 [2024-07-15 17:08:25.623009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-07-15 17:08:25.623051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.055 [2024-07-15 17:08:25.623072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.055 [2024-07-15 17:08:25.623291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.055 [2024-07-15 17:08:25.623470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.055 [2024-07-15 17:08:25.623478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.055 [2024-07-15 17:08:25.623485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.055 [2024-07-15 17:08:25.626319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.055 [2024-07-15 17:08:25.635684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.055 [2024-07-15 17:08:25.636147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-07-15 17:08:25.636163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.055 [2024-07-15 17:08:25.636169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.055 [2024-07-15 17:08:25.636352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.055 [2024-07-15 17:08:25.636530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.055 [2024-07-15 17:08:25.636538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.055 [2024-07-15 17:08:25.636544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.055 [2024-07-15 17:08:25.639381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.055 [2024-07-15 17:08:25.648746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.055 [2024-07-15 17:08:25.649187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-07-15 17:08:25.649203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.055 [2024-07-15 17:08:25.649210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.055 [2024-07-15 17:08:25.649393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.055 [2024-07-15 17:08:25.649570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.055 [2024-07-15 17:08:25.649578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.055 [2024-07-15 17:08:25.649588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.055 [2024-07-15 17:08:25.652422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.055 [2024-07-15 17:08:25.661707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.055 [2024-07-15 17:08:25.662194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-07-15 17:08:25.662210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.055 [2024-07-15 17:08:25.662216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.055 [2024-07-15 17:08:25.662400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.055 [2024-07-15 17:08:25.662578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.055 [2024-07-15 17:08:25.662586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.055 [2024-07-15 17:08:25.662592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.055 [2024-07-15 17:08:25.665427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.055 [2024-07-15 17:08:25.674802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.055 [2024-07-15 17:08:25.675199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-07-15 17:08:25.675215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.055 [2024-07-15 17:08:25.675222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.055 [2024-07-15 17:08:25.675405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.055 [2024-07-15 17:08:25.675582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.055 [2024-07-15 17:08:25.675590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.055 [2024-07-15 17:08:25.675596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.055 [2024-07-15 17:08:25.678442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.055 [2024-07-15 17:08:25.687808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.055 [2024-07-15 17:08:25.688268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-07-15 17:08:25.688286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.055 [2024-07-15 17:08:25.688293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.055 [2024-07-15 17:08:25.688471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.055 [2024-07-15 17:08:25.688648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.055 [2024-07-15 17:08:25.688656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.055 [2024-07-15 17:08:25.688663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.055 [2024-07-15 17:08:25.691512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.055 [2024-07-15 17:08:25.700814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.055 [2024-07-15 17:08:25.701305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-07-15 17:08:25.701356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.055 [2024-07-15 17:08:25.701378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.055 [2024-07-15 17:08:25.701781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.055 [2024-07-15 17:08:25.702035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.055 [2024-07-15 17:08:25.702046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.055 [2024-07-15 17:08:25.702054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.055 [2024-07-15 17:08:25.706125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.055 [2024-07-15 17:08:25.714141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.055 [2024-07-15 17:08:25.714518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-07-15 17:08:25.714534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.055 [2024-07-15 17:08:25.714540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.055 [2024-07-15 17:08:25.714718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.317 [2024-07-15 17:08:25.714894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.317 [2024-07-15 17:08:25.714902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.317 [2024-07-15 17:08:25.714909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.317 [2024-07-15 17:08:25.717745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.317 [2024-07-15 17:08:25.727293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.317 [2024-07-15 17:08:25.727773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.317 [2024-07-15 17:08:25.727816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.317 [2024-07-15 17:08:25.727838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.317 [2024-07-15 17:08:25.728433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.317 [2024-07-15 17:08:25.728629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.317 [2024-07-15 17:08:25.728637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.317 [2024-07-15 17:08:25.728643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.317 [2024-07-15 17:08:25.731478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.317 [2024-07-15 17:08:25.740347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.317 [2024-07-15 17:08:25.740769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.317 [2024-07-15 17:08:25.740816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.317 [2024-07-15 17:08:25.740838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.317 [2024-07-15 17:08:25.741433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.317 [2024-07-15 17:08:25.741639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.317 [2024-07-15 17:08:25.741647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.317 [2024-07-15 17:08:25.741653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.317 [2024-07-15 17:08:25.744489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.317 [2024-07-15 17:08:25.753529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.317 [2024-07-15 17:08:25.754011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.317 [2024-07-15 17:08:25.754053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.317 [2024-07-15 17:08:25.754075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.317 [2024-07-15 17:08:25.754601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.317 [2024-07-15 17:08:25.754779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.317 [2024-07-15 17:08:25.754787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.317 [2024-07-15 17:08:25.754794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.317 [2024-07-15 17:08:25.757633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.317 [2024-07-15 17:08:25.766726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.317 [2024-07-15 17:08:25.767196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.317 [2024-07-15 17:08:25.767259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.317 [2024-07-15 17:08:25.767283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.317 [2024-07-15 17:08:25.767831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.317 [2024-07-15 17:08:25.768008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.317 [2024-07-15 17:08:25.768016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.317 [2024-07-15 17:08:25.768023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.317 [2024-07-15 17:08:25.770862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.317 [2024-07-15 17:08:25.779837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.317 [2024-07-15 17:08:25.780299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.317 [2024-07-15 17:08:25.780314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.317 [2024-07-15 17:08:25.780321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.317 [2024-07-15 17:08:25.780497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.318 [2024-07-15 17:08:25.780674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.318 [2024-07-15 17:08:25.780682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.318 [2024-07-15 17:08:25.780688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.318 [2024-07-15 17:08:25.783526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.318 [2024-07-15 17:08:25.792885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.318 [2024-07-15 17:08:25.793299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.318 [2024-07-15 17:08:25.793340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.318 [2024-07-15 17:08:25.793363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.318 [2024-07-15 17:08:25.793943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.318 [2024-07-15 17:08:25.794136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.318 [2024-07-15 17:08:25.794144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.318 [2024-07-15 17:08:25.794150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.318 [2024-07-15 17:08:25.796996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.318 [2024-07-15 17:08:25.806012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.318 [2024-07-15 17:08:25.806486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.318 [2024-07-15 17:08:25.806502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.318 [2024-07-15 17:08:25.806509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.318 [2024-07-15 17:08:25.806686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.318 [2024-07-15 17:08:25.806863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.318 [2024-07-15 17:08:25.806870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.318 [2024-07-15 17:08:25.806877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.318 [2024-07-15 17:08:25.809717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.318 [2024-07-15 17:08:25.819095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.318 [2024-07-15 17:08:25.819568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.318 [2024-07-15 17:08:25.819584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.318 [2024-07-15 17:08:25.819591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.318 [2024-07-15 17:08:25.819767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.318 [2024-07-15 17:08:25.819944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.318 [2024-07-15 17:08:25.819952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.318 [2024-07-15 17:08:25.819958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.318 [2024-07-15 17:08:25.822799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.318 [2024-07-15 17:08:25.832149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.318 [2024-07-15 17:08:25.832540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.318 [2024-07-15 17:08:25.832556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.318 [2024-07-15 17:08:25.832567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.318 [2024-07-15 17:08:25.832745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.318 [2024-07-15 17:08:25.832924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.318 [2024-07-15 17:08:25.832932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.318 [2024-07-15 17:08:25.832939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.318 [2024-07-15 17:08:25.835781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.318 [2024-07-15 17:08:25.845325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.318 [2024-07-15 17:08:25.845602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.318 [2024-07-15 17:08:25.845617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.318 [2024-07-15 17:08:25.845624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.318 [2024-07-15 17:08:25.845801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.318 [2024-07-15 17:08:25.845977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.318 [2024-07-15 17:08:25.845985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.318 [2024-07-15 17:08:25.845991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.318 [2024-07-15 17:08:25.848822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.318 [2024-07-15 17:08:25.858333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.318 [2024-07-15 17:08:25.858734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.318 [2024-07-15 17:08:25.858749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.318 [2024-07-15 17:08:25.858756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.318 [2024-07-15 17:08:25.858928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.318 [2024-07-15 17:08:25.859099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.318 [2024-07-15 17:08:25.859107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.318 [2024-07-15 17:08:25.859113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.318 [2024-07-15 17:08:25.861928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.318 [2024-07-15 17:08:25.871497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.318 [2024-07-15 17:08:25.871838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.318 [2024-07-15 17:08:25.871854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.318 [2024-07-15 17:08:25.871860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.318 [2024-07-15 17:08:25.872037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.318 [2024-07-15 17:08:25.872215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.318 [2024-07-15 17:08:25.872231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.318 [2024-07-15 17:08:25.872237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.318 [2024-07-15 17:08:25.875075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.318 [2024-07-15 17:08:25.884623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.318 [2024-07-15 17:08:25.885016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.318 [2024-07-15 17:08:25.885032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.318 [2024-07-15 17:08:25.885039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.318 [2024-07-15 17:08:25.885216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.318 [2024-07-15 17:08:25.885397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.318 [2024-07-15 17:08:25.885405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.318 [2024-07-15 17:08:25.885411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.318 [2024-07-15 17:08:25.888250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.318 [2024-07-15 17:08:25.897786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.318 [2024-07-15 17:08:25.898289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.318 [2024-07-15 17:08:25.898332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.318 [2024-07-15 17:08:25.898353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.318 [2024-07-15 17:08:25.898865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.318 [2024-07-15 17:08:25.899044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.318 [2024-07-15 17:08:25.899052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.318 [2024-07-15 17:08:25.899058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.318 [2024-07-15 17:08:25.901907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.318 [2024-07-15 17:08:25.910846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.318 [2024-07-15 17:08:25.911329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.318 [2024-07-15 17:08:25.911370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.318 [2024-07-15 17:08:25.911391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.318 [2024-07-15 17:08:25.911955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.319 [2024-07-15 17:08:25.912132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.319 [2024-07-15 17:08:25.912140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.319 [2024-07-15 17:08:25.912146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.319 [2024-07-15 17:08:25.914987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.319 [2024-07-15 17:08:25.924023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.319 [2024-07-15 17:08:25.924516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.319 [2024-07-15 17:08:25.924557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.319 [2024-07-15 17:08:25.924578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.319 [2024-07-15 17:08:25.925007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.319 [2024-07-15 17:08:25.925185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.319 [2024-07-15 17:08:25.925192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.319 [2024-07-15 17:08:25.925199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.319 [2024-07-15 17:08:25.928040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.319 [2024-07-15 17:08:25.937045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.319 [2024-07-15 17:08:25.937516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.319 [2024-07-15 17:08:25.937532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.319 [2024-07-15 17:08:25.937539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.319 [2024-07-15 17:08:25.937717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.319 [2024-07-15 17:08:25.937897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.319 [2024-07-15 17:08:25.937904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.319 [2024-07-15 17:08:25.937910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.319 [2024-07-15 17:08:25.940755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.319 [2024-07-15 17:08:25.950145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.319 [2024-07-15 17:08:25.950629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.319 [2024-07-15 17:08:25.950644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.319 [2024-07-15 17:08:25.950651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.319 [2024-07-15 17:08:25.950828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.319 [2024-07-15 17:08:25.951006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.319 [2024-07-15 17:08:25.951014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.319 [2024-07-15 17:08:25.951020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.319 [2024-07-15 17:08:25.953895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.319 [2024-07-15 17:08:25.963221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.319 [2024-07-15 17:08:25.963692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.319 [2024-07-15 17:08:25.963708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.319 [2024-07-15 17:08:25.963714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.319 [2024-07-15 17:08:25.963897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.319 [2024-07-15 17:08:25.964074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.319 [2024-07-15 17:08:25.964082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.319 [2024-07-15 17:08:25.964088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.319 [2024-07-15 17:08:25.966928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.319 [2024-07-15 17:08:25.976442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.319 [2024-07-15 17:08:25.976838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.319 [2024-07-15 17:08:25.976879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.319 [2024-07-15 17:08:25.976901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.319 [2024-07-15 17:08:25.977371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.319 [2024-07-15 17:08:25.977549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.319 [2024-07-15 17:08:25.977557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.319 [2024-07-15 17:08:25.977563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.319 [2024-07-15 17:08:25.980393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.580 [2024-07-15 17:08:25.989595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.580 [2024-07-15 17:08:25.989976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.580 [2024-07-15 17:08:25.990015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.580 [2024-07-15 17:08:25.990037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.580 [2024-07-15 17:08:25.990568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.580 [2024-07-15 17:08:25.990746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.580 [2024-07-15 17:08:25.990753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.580 [2024-07-15 17:08:25.990760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.580 [2024-07-15 17:08:25.993595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.580 [2024-07-15 17:08:26.002510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.580 [2024-07-15 17:08:26.002974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.580 [2024-07-15 17:08:26.003016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.580 [2024-07-15 17:08:26.003037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.580 [2024-07-15 17:08:26.003468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.580 [2024-07-15 17:08:26.003641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.580 [2024-07-15 17:08:26.003648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.580 [2024-07-15 17:08:26.003658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.580 [2024-07-15 17:08:26.006345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.580 [2024-07-15 17:08:26.015359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.580 [2024-07-15 17:08:26.015735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.580 [2024-07-15 17:08:26.015750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.580 [2024-07-15 17:08:26.015756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.580 [2024-07-15 17:08:26.015919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.580 [2024-07-15 17:08:26.016081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.580 [2024-07-15 17:08:26.016088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.580 [2024-07-15 17:08:26.016094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.580 [2024-07-15 17:08:26.018887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.580 [2024-07-15 17:08:26.028182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.580 [2024-07-15 17:08:26.028637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.580 [2024-07-15 17:08:26.028652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.580 [2024-07-15 17:08:26.028659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.580 [2024-07-15 17:08:26.028830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.580 [2024-07-15 17:08:26.029001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.580 [2024-07-15 17:08:26.029009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.580 [2024-07-15 17:08:26.029015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.580 [2024-07-15 17:08:26.031701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.580 [2024-07-15 17:08:26.040999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.580 [2024-07-15 17:08:26.041335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.580 [2024-07-15 17:08:26.041350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.580 [2024-07-15 17:08:26.041356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.580 [2024-07-15 17:08:26.041518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.580 [2024-07-15 17:08:26.041681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.580 [2024-07-15 17:08:26.041688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.580 [2024-07-15 17:08:26.041694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.580 [2024-07-15 17:08:26.044297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.580 [2024-07-15 17:08:26.053805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.580 [2024-07-15 17:08:26.054275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.580 [2024-07-15 17:08:26.054325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.580 [2024-07-15 17:08:26.054348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.580 [2024-07-15 17:08:26.054927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.580 [2024-07-15 17:08:26.055523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.580 [2024-07-15 17:08:26.055549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.580 [2024-07-15 17:08:26.055568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.580 [2024-07-15 17:08:26.059687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.580 [2024-07-15 17:08:26.067521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.580 [2024-07-15 17:08:26.068016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.580 [2024-07-15 17:08:26.068058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.580 [2024-07-15 17:08:26.068079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.580 [2024-07-15 17:08:26.068542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.580 [2024-07-15 17:08:26.068715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.580 [2024-07-15 17:08:26.068723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.580 [2024-07-15 17:08:26.068729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.580 [2024-07-15 17:08:26.071492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.580 [2024-07-15 17:08:26.080309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.580 [2024-07-15 17:08:26.080739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.580 [2024-07-15 17:08:26.080779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.580 [2024-07-15 17:08:26.080801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.580 [2024-07-15 17:08:26.081353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.580 [2024-07-15 17:08:26.081518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.580 [2024-07-15 17:08:26.081525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.580 [2024-07-15 17:08:26.081531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.580 [2024-07-15 17:08:26.084133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.580 [2024-07-15 17:08:26.093129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.580 [2024-07-15 17:08:26.093619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.580 [2024-07-15 17:08:26.093662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.580 [2024-07-15 17:08:26.093686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.580 [2024-07-15 17:08:26.094281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.580 [2024-07-15 17:08:26.094778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.580 [2024-07-15 17:08:26.094786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.580 [2024-07-15 17:08:26.094793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.580 [2024-07-15 17:08:26.097487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.580 [2024-07-15 17:08:26.106036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.580 [2024-07-15 17:08:26.106439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.580 [2024-07-15 17:08:26.106481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.580 [2024-07-15 17:08:26.106503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.580 [2024-07-15 17:08:26.107082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.580 [2024-07-15 17:08:26.107672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.580 [2024-07-15 17:08:26.107697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.580 [2024-07-15 17:08:26.107720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.580 [2024-07-15 17:08:26.110407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.580 [2024-07-15 17:08:26.118940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.580 [2024-07-15 17:08:26.119328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.580 [2024-07-15 17:08:26.119343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.580 [2024-07-15 17:08:26.119351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.581 [2024-07-15 17:08:26.119526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.581 [2024-07-15 17:08:26.119690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.581 [2024-07-15 17:08:26.119698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.581 [2024-07-15 17:08:26.119704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.581 [2024-07-15 17:08:26.122418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.581 [2024-07-15 17:08:26.132149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.581 [2024-07-15 17:08:26.132551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.581 [2024-07-15 17:08:26.132567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.581 [2024-07-15 17:08:26.132575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.581 [2024-07-15 17:08:26.132751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.581 [2024-07-15 17:08:26.132929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.581 [2024-07-15 17:08:26.132937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.581 [2024-07-15 17:08:26.132944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.581 [2024-07-15 17:08:26.135707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.581 [2024-07-15 17:08:26.145119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.581 [2024-07-15 17:08:26.145475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.581 [2024-07-15 17:08:26.145491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.581 [2024-07-15 17:08:26.145498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.581 [2024-07-15 17:08:26.145669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.581 [2024-07-15 17:08:26.145841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.581 [2024-07-15 17:08:26.145849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.581 [2024-07-15 17:08:26.145855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.581 [2024-07-15 17:08:26.148610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.581 [2024-07-15 17:08:26.158101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.581 [2024-07-15 17:08:26.158589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.581 [2024-07-15 17:08:26.158630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.581 [2024-07-15 17:08:26.158652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.581 [2024-07-15 17:08:26.159244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.581 [2024-07-15 17:08:26.159752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.581 [2024-07-15 17:08:26.159760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.581 [2024-07-15 17:08:26.159766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.581 [2024-07-15 17:08:26.162472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.581 [2024-07-15 17:08:26.171074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.581 [2024-07-15 17:08:26.171511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.581 [2024-07-15 17:08:26.171527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.581 [2024-07-15 17:08:26.171533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.581 [2024-07-15 17:08:26.171705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.581 [2024-07-15 17:08:26.171876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.581 [2024-07-15 17:08:26.171884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.581 [2024-07-15 17:08:26.171890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.581 [2024-07-15 17:08:26.174705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.581 [2024-07-15 17:08:26.184044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.581 [2024-07-15 17:08:26.184434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.581 [2024-07-15 17:08:26.184451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.581 [2024-07-15 17:08:26.184462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.581 [2024-07-15 17:08:26.184634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.581 [2024-07-15 17:08:26.184807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.581 [2024-07-15 17:08:26.184814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.581 [2024-07-15 17:08:26.184820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.581 [2024-07-15 17:08:26.187574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.581 [2024-07-15 17:08:26.197149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.581 [2024-07-15 17:08:26.197543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.581 [2024-07-15 17:08:26.197559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.581 [2024-07-15 17:08:26.197566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.581 [2024-07-15 17:08:26.197743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.581 [2024-07-15 17:08:26.197921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.581 [2024-07-15 17:08:26.197929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.581 [2024-07-15 17:08:26.197935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.581 [2024-07-15 17:08:26.200788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.581 [2024-07-15 17:08:26.209956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.581 [2024-07-15 17:08:26.210254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.581 [2024-07-15 17:08:26.210270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.581 [2024-07-15 17:08:26.210277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.581 [2024-07-15 17:08:26.210458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.581 [2024-07-15 17:08:26.210621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.581 [2024-07-15 17:08:26.210629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.581 [2024-07-15 17:08:26.210636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.581 [2024-07-15 17:08:26.213305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.581 [2024-07-15 17:08:26.222923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.581 [2024-07-15 17:08:26.223270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.581 [2024-07-15 17:08:26.223287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.581 [2024-07-15 17:08:26.223294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.581 [2024-07-15 17:08:26.223472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.581 [2024-07-15 17:08:26.223655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.581 [2024-07-15 17:08:26.223666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.581 [2024-07-15 17:08:26.223672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.581 [2024-07-15 17:08:26.226363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.581 [2024-07-15 17:08:26.235839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.581 [2024-07-15 17:08:26.236168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.581 [2024-07-15 17:08:26.236184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.581 [2024-07-15 17:08:26.236190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.581 [2024-07-15 17:08:26.236369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.581 [2024-07-15 17:08:26.236541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.581 [2024-07-15 17:08:26.236549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.581 [2024-07-15 17:08:26.236555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.581 [2024-07-15 17:08:26.239245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.842 [2024-07-15 17:08:26.249033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.842 [2024-07-15 17:08:26.249487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.842 [2024-07-15 17:08:26.249503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.842 [2024-07-15 17:08:26.249509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.842 [2024-07-15 17:08:26.249680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.842 [2024-07-15 17:08:26.249852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.842 [2024-07-15 17:08:26.249859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.842 [2024-07-15 17:08:26.249866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.842 [2024-07-15 17:08:26.252682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.842 [2024-07-15 17:08:26.262139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.842 [2024-07-15 17:08:26.262470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.842 [2024-07-15 17:08:26.262512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.842 [2024-07-15 17:08:26.262533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.842 [2024-07-15 17:08:26.263104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.843 [2024-07-15 17:08:26.263287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.843 [2024-07-15 17:08:26.263295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.843 [2024-07-15 17:08:26.263303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.843 [2024-07-15 17:08:26.266052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.843 [2024-07-15 17:08:26.275164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.843 [2024-07-15 17:08:26.275558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.843 [2024-07-15 17:08:26.275573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.843 [2024-07-15 17:08:26.275580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.843 [2024-07-15 17:08:26.275751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.843 [2024-07-15 17:08:26.275929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.843 [2024-07-15 17:08:26.275936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.843 [2024-07-15 17:08:26.275942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.843 [2024-07-15 17:08:26.278635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.843 [2024-07-15 17:08:26.288052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.843 [2024-07-15 17:08:26.288431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.843 [2024-07-15 17:08:26.288446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.843 [2024-07-15 17:08:26.288453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.843 [2024-07-15 17:08:26.288625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.843 [2024-07-15 17:08:26.288799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.843 [2024-07-15 17:08:26.288809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.843 [2024-07-15 17:08:26.288814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.843 [2024-07-15 17:08:26.291509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.843 [2024-07-15 17:08:26.300857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.843 [2024-07-15 17:08:26.301319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.843 [2024-07-15 17:08:26.301335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.843 [2024-07-15 17:08:26.301342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.843 [2024-07-15 17:08:26.301514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.843 [2024-07-15 17:08:26.301686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.843 [2024-07-15 17:08:26.301694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.843 [2024-07-15 17:08:26.301700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.843 [2024-07-15 17:08:26.304432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.843 [2024-07-15 17:08:26.313831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.843 [2024-07-15 17:08:26.314302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.843 [2024-07-15 17:08:26.314345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.843 [2024-07-15 17:08:26.314367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.843 [2024-07-15 17:08:26.314947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.843 [2024-07-15 17:08:26.315119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.843 [2024-07-15 17:08:26.315126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.843 [2024-07-15 17:08:26.315132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.843 [2024-07-15 17:08:26.317827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.843 [2024-07-15 17:08:26.326667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.843 [2024-07-15 17:08:26.327138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.843 [2024-07-15 17:08:26.327155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.843 [2024-07-15 17:08:26.327161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.843 [2024-07-15 17:08:26.327340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.843 [2024-07-15 17:08:26.327513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.843 [2024-07-15 17:08:26.327521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.843 [2024-07-15 17:08:26.327527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.843 [2024-07-15 17:08:26.330218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.843 [2024-07-15 17:08:26.339551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.843 [2024-07-15 17:08:26.340018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.843 [2024-07-15 17:08:26.340058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.843 [2024-07-15 17:08:26.340079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.843 [2024-07-15 17:08:26.340673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.843 [2024-07-15 17:08:26.341215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.843 [2024-07-15 17:08:26.341223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.843 [2024-07-15 17:08:26.341234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.843 [2024-07-15 17:08:26.343923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.843 [2024-07-15 17:08:26.352486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.843 [2024-07-15 17:08:26.352946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.843 [2024-07-15 17:08:26.352962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.843 [2024-07-15 17:08:26.352968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.843 [2024-07-15 17:08:26.353139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.843 [2024-07-15 17:08:26.353317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.843 [2024-07-15 17:08:26.353325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.843 [2024-07-15 17:08:26.353335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.843 [2024-07-15 17:08:26.356006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.843 [2024-07-15 17:08:26.365339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.844 [2024-07-15 17:08:26.365670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.844 [2024-07-15 17:08:26.365685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:19.844 [2024-07-15 17:08:26.365692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:19.844 [2024-07-15 17:08:26.365864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:19.844 [2024-07-15 17:08:26.366035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.844 [2024-07-15 17:08:26.366042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.844 [2024-07-15 17:08:26.366048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.844 [2024-07-15 17:08:26.368787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.844 [2024-07-15 17:08:26.378308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.844 [2024-07-15 17:08:26.378703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.844 [2024-07-15 17:08:26.378719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.844 [2024-07-15 17:08:26.378725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.844 [2024-07-15 17:08:26.378897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.844 [2024-07-15 17:08:26.379068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.844 [2024-07-15 17:08:26.379076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.844 [2024-07-15 17:08:26.379082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.844 [2024-07-15 17:08:26.381921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.844 [2024-07-15 17:08:26.391331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.844 [2024-07-15 17:08:26.391715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.844 [2024-07-15 17:08:26.391731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.844 [2024-07-15 17:08:26.391737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.844 [2024-07-15 17:08:26.391909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.844 [2024-07-15 17:08:26.392081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.844 [2024-07-15 17:08:26.392089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.844 [2024-07-15 17:08:26.392095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.844 [2024-07-15 17:08:26.394859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.844 [2024-07-15 17:08:26.404281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.844 [2024-07-15 17:08:26.404665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.844 [2024-07-15 17:08:26.404684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.844 [2024-07-15 17:08:26.404690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.844 [2024-07-15 17:08:26.404862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.844 [2024-07-15 17:08:26.405037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.844 [2024-07-15 17:08:26.405045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.844 [2024-07-15 17:08:26.405051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.844 [2024-07-15 17:08:26.407750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.844 [2024-07-15 17:08:26.417220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.844 [2024-07-15 17:08:26.417675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.844 [2024-07-15 17:08:26.417690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.844 [2024-07-15 17:08:26.417697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.844 [2024-07-15 17:08:26.417868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.844 [2024-07-15 17:08:26.418039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.844 [2024-07-15 17:08:26.418047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.844 [2024-07-15 17:08:26.418053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.844 [2024-07-15 17:08:26.420747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.844 [2024-07-15 17:08:26.430032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.844 [2024-07-15 17:08:26.430980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.844 [2024-07-15 17:08:26.431002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.844 [2024-07-15 17:08:26.431010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.844 [2024-07-15 17:08:26.431190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.844 [2024-07-15 17:08:26.431370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.844 [2024-07-15 17:08:26.431379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.844 [2024-07-15 17:08:26.431385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.844 [2024-07-15 17:08:26.434072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.844 [2024-07-15 17:08:26.443017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.844 [2024-07-15 17:08:26.443460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.844 [2024-07-15 17:08:26.443476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.844 [2024-07-15 17:08:26.443483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.844 [2024-07-15 17:08:26.443655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.844 [2024-07-15 17:08:26.443830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.844 [2024-07-15 17:08:26.443839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.844 [2024-07-15 17:08:26.443845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.844 [2024-07-15 17:08:26.446555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.844 [2024-07-15 17:08:26.455869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.844 [2024-07-15 17:08:26.456339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.844 [2024-07-15 17:08:26.456356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.844 [2024-07-15 17:08:26.456363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.844 [2024-07-15 17:08:26.456535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.845 [2024-07-15 17:08:26.456706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.845 [2024-07-15 17:08:26.456714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.845 [2024-07-15 17:08:26.456720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.845 [2024-07-15 17:08:26.459424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.845 [2024-07-15 17:08:26.468738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.845 [2024-07-15 17:08:26.469235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.845 [2024-07-15 17:08:26.469250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.845 [2024-07-15 17:08:26.469256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.845 [2024-07-15 17:08:26.469428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.845 [2024-07-15 17:08:26.469600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.845 [2024-07-15 17:08:26.469608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.845 [2024-07-15 17:08:26.469614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.845 [2024-07-15 17:08:26.472385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.845 [2024-07-15 17:08:26.481847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.845 [2024-07-15 17:08:26.482331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.845 [2024-07-15 17:08:26.482348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.845 [2024-07-15 17:08:26.482355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.845 [2024-07-15 17:08:26.482527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.845 [2024-07-15 17:08:26.482700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.845 [2024-07-15 17:08:26.482708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.845 [2024-07-15 17:08:26.482714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.845 [2024-07-15 17:08:26.485509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.845 [2024-07-15 17:08:26.494783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.845 [2024-07-15 17:08:26.495242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.845 [2024-07-15 17:08:26.495286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.845 [2024-07-15 17:08:26.495308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.845 [2024-07-15 17:08:26.495886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.845 [2024-07-15 17:08:26.496498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.845 [2024-07-15 17:08:26.496524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.845 [2024-07-15 17:08:26.496544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:19.845 [2024-07-15 17:08:26.499280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:19.845 [2024-07-15 17:08:26.507863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:19.845 [2024-07-15 17:08:26.508330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.845 [2024-07-15 17:08:26.508373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:19.845 [2024-07-15 17:08:26.508394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:19.845 [2024-07-15 17:08:26.508972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:19.845 [2024-07-15 17:08:26.509566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:19.845 [2024-07-15 17:08:26.509591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:19.845 [2024-07-15 17:08:26.509612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.104 [2024-07-15 17:08:26.513705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.104 [2024-07-15 17:08:26.521366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.104 [2024-07-15 17:08:26.521750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.104 [2024-07-15 17:08:26.521792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.104 [2024-07-15 17:08:26.521814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.104 [2024-07-15 17:08:26.522355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.104 [2024-07-15 17:08:26.522528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.104 [2024-07-15 17:08:26.522536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.104 [2024-07-15 17:08:26.522542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.104 [2024-07-15 17:08:26.525328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.104 [2024-07-15 17:08:26.534268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.104 [2024-07-15 17:08:26.534681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.104 [2024-07-15 17:08:26.534724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.104 [2024-07-15 17:08:26.534752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.104 [2024-07-15 17:08:26.535342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.104 [2024-07-15 17:08:26.535810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.104 [2024-07-15 17:08:26.535818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.104 [2024-07-15 17:08:26.535824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.104 [2024-07-15 17:08:26.538516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.104 [2024-07-15 17:08:26.547062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.104 [2024-07-15 17:08:26.547397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.104 [2024-07-15 17:08:26.547412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.104 [2024-07-15 17:08:26.547419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.104 [2024-07-15 17:08:26.547590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.104 [2024-07-15 17:08:26.547761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.104 [2024-07-15 17:08:26.547768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.104 [2024-07-15 17:08:26.547775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.104 [2024-07-15 17:08:26.550468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.104 [2024-07-15 17:08:26.559944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.104 [2024-07-15 17:08:26.560345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.104 [2024-07-15 17:08:26.560363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.104 [2024-07-15 17:08:26.560369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.104 [2024-07-15 17:08:26.560544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.104 [2024-07-15 17:08:26.560707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.104 [2024-07-15 17:08:26.560714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.104 [2024-07-15 17:08:26.560720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.104 [2024-07-15 17:08:26.563397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.104 [2024-07-15 17:08:26.572790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.104 [2024-07-15 17:08:26.573251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.104 [2024-07-15 17:08:26.573293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.104 [2024-07-15 17:08:26.573314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.104 [2024-07-15 17:08:26.573841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.104 [2024-07-15 17:08:26.574013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.104 [2024-07-15 17:08:26.574024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.104 [2024-07-15 17:08:26.574030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.104 [2024-07-15 17:08:26.576793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.104 [2024-07-15 17:08:26.585635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.104 [2024-07-15 17:08:26.586088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.104 [2024-07-15 17:08:26.586122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.104 [2024-07-15 17:08:26.586145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.104 [2024-07-15 17:08:26.586685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.104 [2024-07-15 17:08:26.586858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.104 [2024-07-15 17:08:26.586866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.104 [2024-07-15 17:08:26.586872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.104 [2024-07-15 17:08:26.589560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.104 [2024-07-15 17:08:26.598573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.104 [2024-07-15 17:08:26.599001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.104 [2024-07-15 17:08:26.599016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.104 [2024-07-15 17:08:26.599022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.104 [2024-07-15 17:08:26.599185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.105 [2024-07-15 17:08:26.599376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.105 [2024-07-15 17:08:26.599384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.105 [2024-07-15 17:08:26.599391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.105 [2024-07-15 17:08:26.602074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.105 [2024-07-15 17:08:26.611387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.105 [2024-07-15 17:08:26.611872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.105 [2024-07-15 17:08:26.611913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.105 [2024-07-15 17:08:26.611934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.105 [2024-07-15 17:08:26.612529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.105 [2024-07-15 17:08:26.612764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.105 [2024-07-15 17:08:26.612772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.105 [2024-07-15 17:08:26.612779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.105 [2024-07-15 17:08:26.615465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.105 [2024-07-15 17:08:26.624312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.105 [2024-07-15 17:08:26.624774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.105 [2024-07-15 17:08:26.624788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.105 [2024-07-15 17:08:26.624795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.105 [2024-07-15 17:08:26.624967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.105 [2024-07-15 17:08:26.625138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.105 [2024-07-15 17:08:26.625146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.105 [2024-07-15 17:08:26.625152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.105 [2024-07-15 17:08:26.627847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.105 [2024-07-15 17:08:26.637153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.105 [2024-07-15 17:08:26.637629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.105 [2024-07-15 17:08:26.637671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.105 [2024-07-15 17:08:26.637693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.105 [2024-07-15 17:08:26.638172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.105 [2024-07-15 17:08:26.638350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.105 [2024-07-15 17:08:26.638358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.105 [2024-07-15 17:08:26.638364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.105 [2024-07-15 17:08:26.641212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.105 [2024-07-15 17:08:26.650193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.105 [2024-07-15 17:08:26.650634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.105 [2024-07-15 17:08:26.650671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.105 [2024-07-15 17:08:26.650694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.105 [2024-07-15 17:08:26.651220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.105 [2024-07-15 17:08:26.651399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.105 [2024-07-15 17:08:26.651408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.105 [2024-07-15 17:08:26.651413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.105 [2024-07-15 17:08:26.654136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.105 [2024-07-15 17:08:26.663236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.105 [2024-07-15 17:08:26.663668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.105 [2024-07-15 17:08:26.663684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.105 [2024-07-15 17:08:26.663691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.105 [2024-07-15 17:08:26.663865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.105 [2024-07-15 17:08:26.664037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.105 [2024-07-15 17:08:26.664044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.105 [2024-07-15 17:08:26.664050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.105 [2024-07-15 17:08:26.666805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.105 [2024-07-15 17:08:26.676102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.105 [2024-07-15 17:08:26.676559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.105 [2024-07-15 17:08:26.676601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.105 [2024-07-15 17:08:26.676622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.105 [2024-07-15 17:08:26.677106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.105 [2024-07-15 17:08:26.677284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.105 [2024-07-15 17:08:26.677293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.105 [2024-07-15 17:08:26.677299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.105 [2024-07-15 17:08:26.679980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.105 [2024-07-15 17:08:26.689005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.105 [2024-07-15 17:08:26.689438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.105 [2024-07-15 17:08:26.689453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.105 [2024-07-15 17:08:26.689460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.105 [2024-07-15 17:08:26.689623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.105 [2024-07-15 17:08:26.689786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.105 [2024-07-15 17:08:26.689793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.105 [2024-07-15 17:08:26.689799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.105 [2024-07-15 17:08:26.692403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.105 [2024-07-15 17:08:26.701927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.105 [2024-07-15 17:08:26.702358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.105 [2024-07-15 17:08:26.702374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.105 [2024-07-15 17:08:26.702380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.105 [2024-07-15 17:08:26.702543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.105 [2024-07-15 17:08:26.702705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.105 [2024-07-15 17:08:26.702712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.105 [2024-07-15 17:08:26.702721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.105 [2024-07-15 17:08:26.705400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.105 [2024-07-15 17:08:26.714862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.105 [2024-07-15 17:08:26.715242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.105 [2024-07-15 17:08:26.715286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.105 [2024-07-15 17:08:26.715308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.105 [2024-07-15 17:08:26.715887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.105 [2024-07-15 17:08:26.716122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.105 [2024-07-15 17:08:26.716130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.105 [2024-07-15 17:08:26.716137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.105 [2024-07-15 17:08:26.718826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.105 [2024-07-15 17:08:26.727669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.105 [2024-07-15 17:08:26.728114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.105 [2024-07-15 17:08:26.728167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.105 [2024-07-15 17:08:26.728188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.105 [2024-07-15 17:08:26.728783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.105 [2024-07-15 17:08:26.729246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.105 [2024-07-15 17:08:26.729254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.105 [2024-07-15 17:08:26.729260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.105 [2024-07-15 17:08:26.731943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.105 [2024-07-15 17:08:26.740475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.105 [2024-07-15 17:08:26.740882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.105 [2024-07-15 17:08:26.740924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.105 [2024-07-15 17:08:26.740946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.105 [2024-07-15 17:08:26.741443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.106 [2024-07-15 17:08:26.741616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.106 [2024-07-15 17:08:26.741623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.106 [2024-07-15 17:08:26.741629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.106 [2024-07-15 17:08:26.744315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.106 [2024-07-15 17:08:26.753323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.106 [2024-07-15 17:08:26.753791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.106 [2024-07-15 17:08:26.753839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.106 [2024-07-15 17:08:26.753861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.106 [2024-07-15 17:08:26.754278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.106 [2024-07-15 17:08:26.754451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.106 [2024-07-15 17:08:26.754459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.106 [2024-07-15 17:08:26.754465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.106 [2024-07-15 17:08:26.757247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.106 [2024-07-15 17:08:26.766250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.106 [2024-07-15 17:08:26.766715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.106 [2024-07-15 17:08:26.766757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.106 [2024-07-15 17:08:26.766778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.106 [2024-07-15 17:08:26.767315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.106 [2024-07-15 17:08:26.767489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.106 [2024-07-15 17:08:26.767497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.106 [2024-07-15 17:08:26.767502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.106 [2024-07-15 17:08:26.770359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.365 [2024-07-15 17:08:26.779236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.365 [2024-07-15 17:08:26.779683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.365 [2024-07-15 17:08:26.779698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.365 [2024-07-15 17:08:26.779705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.365 [2024-07-15 17:08:26.779876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.365 [2024-07-15 17:08:26.780048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.365 [2024-07-15 17:08:26.780055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.365 [2024-07-15 17:08:26.780061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.365 [2024-07-15 17:08:26.782752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.365 [2024-07-15 17:08:26.792029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.365 [2024-07-15 17:08:26.792459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.365 [2024-07-15 17:08:26.792475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.365 [2024-07-15 17:08:26.792482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.365 [2024-07-15 17:08:26.792654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.365 [2024-07-15 17:08:26.792828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.365 [2024-07-15 17:08:26.792836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.365 [2024-07-15 17:08:26.792842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.365 [2024-07-15 17:08:26.795533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.365 [2024-07-15 17:08:26.804838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.365 [2024-07-15 17:08:26.805300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.365 [2024-07-15 17:08:26.805344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.365 [2024-07-15 17:08:26.805366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.365 [2024-07-15 17:08:26.805945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.365 [2024-07-15 17:08:26.806466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.365 [2024-07-15 17:08:26.806474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.365 [2024-07-15 17:08:26.806481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.365 [2024-07-15 17:08:26.809164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.365 [2024-07-15 17:08:26.817630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.365 [2024-07-15 17:08:26.818072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.365 [2024-07-15 17:08:26.818114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.365 [2024-07-15 17:08:26.818136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.365 [2024-07-15 17:08:26.818586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.365 [2024-07-15 17:08:26.818757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.365 [2024-07-15 17:08:26.818764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.365 [2024-07-15 17:08:26.818770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.365 [2024-07-15 17:08:26.821512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.365 [2024-07-15 17:08:26.830688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.365 [2024-07-15 17:08:26.831150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.365 [2024-07-15 17:08:26.831192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.365 [2024-07-15 17:08:26.831213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.365 [2024-07-15 17:08:26.831804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.365 [2024-07-15 17:08:26.832405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.365 [2024-07-15 17:08:26.832417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.365 [2024-07-15 17:08:26.832426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.365 [2024-07-15 17:08:26.836496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.365 [2024-07-15 17:08:26.844437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.365 [2024-07-15 17:08:26.844889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.365 [2024-07-15 17:08:26.844904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.365 [2024-07-15 17:08:26.844911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.365 [2024-07-15 17:08:26.845082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.365 [2024-07-15 17:08:26.845259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.365 [2024-07-15 17:08:26.845268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.365 [2024-07-15 17:08:26.845274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.365 [2024-07-15 17:08:26.847997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.365 [2024-07-15 17:08:26.857292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.365 [2024-07-15 17:08:26.857740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.365 [2024-07-15 17:08:26.857756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.365 [2024-07-15 17:08:26.857762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.365 [2024-07-15 17:08:26.857934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.365 [2024-07-15 17:08:26.858106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.365 [2024-07-15 17:08:26.858114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.365 [2024-07-15 17:08:26.858120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.365 [2024-07-15 17:08:26.860809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.365 [2024-07-15 17:08:26.870137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.365 [2024-07-15 17:08:26.870589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.365 [2024-07-15 17:08:26.870625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.365 [2024-07-15 17:08:26.870648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.365 [2024-07-15 17:08:26.871238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.365 [2024-07-15 17:08:26.871735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.365 [2024-07-15 17:08:26.871743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.365 [2024-07-15 17:08:26.871749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.365 [2024-07-15 17:08:26.874471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.365 [2024-07-15 17:08:26.883050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.365 [2024-07-15 17:08:26.883499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.365 [2024-07-15 17:08:26.883515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.365 [2024-07-15 17:08:26.883525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.365 [2024-07-15 17:08:26.883697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.365 [2024-07-15 17:08:26.883869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.365 [2024-07-15 17:08:26.883876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.365 [2024-07-15 17:08:26.883882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.365 [2024-07-15 17:08:26.886574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.365 [2024-07-15 17:08:26.895961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.365 [2024-07-15 17:08:26.896394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.365 [2024-07-15 17:08:26.896436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.365 [2024-07-15 17:08:26.896457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.365 [2024-07-15 17:08:26.896889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.365 [2024-07-15 17:08:26.897061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.365 [2024-07-15 17:08:26.897069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.365 [2024-07-15 17:08:26.897076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.365 [2024-07-15 17:08:26.899926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.365 [2024-07-15 17:08:26.908876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.365 [2024-07-15 17:08:26.909264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.366 [2024-07-15 17:08:26.909280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.366 [2024-07-15 17:08:26.909286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.366 [2024-07-15 17:08:26.909458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.366 [2024-07-15 17:08:26.909630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.366 [2024-07-15 17:08:26.909637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.366 [2024-07-15 17:08:26.909643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.366 [2024-07-15 17:08:26.912330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.366 [2024-07-15 17:08:26.921808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.366 [2024-07-15 17:08:26.922272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.366 [2024-07-15 17:08:26.922314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.366 [2024-07-15 17:08:26.922336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.366 [2024-07-15 17:08:26.922687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.366 [2024-07-15 17:08:26.922850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.366 [2024-07-15 17:08:26.922860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.366 [2024-07-15 17:08:26.922866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.366 [2024-07-15 17:08:26.925645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.366 [2024-07-15 17:08:26.934641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.366 [2024-07-15 17:08:26.935089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.366 [2024-07-15 17:08:26.935105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.366 [2024-07-15 17:08:26.935111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.366 [2024-07-15 17:08:26.935289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.366 [2024-07-15 17:08:26.935462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.366 [2024-07-15 17:08:26.935469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.366 [2024-07-15 17:08:26.935475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.366 [2024-07-15 17:08:26.938162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.366 [2024-07-15 17:08:26.947559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.366 [2024-07-15 17:08:26.948035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.366 [2024-07-15 17:08:26.948078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.366 [2024-07-15 17:08:26.948099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.366 [2024-07-15 17:08:26.948614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.366 [2024-07-15 17:08:26.948787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.366 [2024-07-15 17:08:26.948794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.366 [2024-07-15 17:08:26.948800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.366 [2024-07-15 17:08:26.951485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.366 [2024-07-15 17:08:26.960473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.366 [2024-07-15 17:08:26.960925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.366 [2024-07-15 17:08:26.960966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.366 [2024-07-15 17:08:26.960988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.366 [2024-07-15 17:08:26.961453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.366 [2024-07-15 17:08:26.961626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.366 [2024-07-15 17:08:26.961634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.366 [2024-07-15 17:08:26.961640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.366 [2024-07-15 17:08:26.964324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.366 [2024-07-15 17:08:26.973301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.366 [2024-07-15 17:08:26.973721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.366 [2024-07-15 17:08:26.973736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.366 [2024-07-15 17:08:26.973742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.366 [2024-07-15 17:08:26.973903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.366 [2024-07-15 17:08:26.974065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.366 [2024-07-15 17:08:26.974072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.366 [2024-07-15 17:08:26.974078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.366 [2024-07-15 17:08:26.976902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.366 [2024-07-15 17:08:26.986363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.366 [2024-07-15 17:08:26.986806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.366 [2024-07-15 17:08:26.986848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.366 [2024-07-15 17:08:26.986870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.366 [2024-07-15 17:08:26.987464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.366 [2024-07-15 17:08:26.987664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.366 [2024-07-15 17:08:26.987671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.366 [2024-07-15 17:08:26.987678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.366 [2024-07-15 17:08:26.990385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.366 [2024-07-15 17:08:26.999273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.366 [2024-07-15 17:08:26.999678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.366 [2024-07-15 17:08:26.999693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.366 [2024-07-15 17:08:26.999699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.366 [2024-07-15 17:08:26.999862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.366 [2024-07-15 17:08:27.000024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.366 [2024-07-15 17:08:27.000032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.366 [2024-07-15 17:08:27.000037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.366 [2024-07-15 17:08:27.002640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.366 [2024-07-15 17:08:27.012149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.366 [2024-07-15 17:08:27.012591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.366 [2024-07-15 17:08:27.012607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.366 [2024-07-15 17:08:27.012613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.366 [2024-07-15 17:08:27.012788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.366 [2024-07-15 17:08:27.012959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.366 [2024-07-15 17:08:27.012967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.366 [2024-07-15 17:08:27.012973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.366 [2024-07-15 17:08:27.015662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.366 [2024-07-15 17:08:27.025144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.366 [2024-07-15 17:08:27.025580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.366 [2024-07-15 17:08:27.025595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.366 [2024-07-15 17:08:27.025602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.366 [2024-07-15 17:08:27.025773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.366 [2024-07-15 17:08:27.025945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.366 [2024-07-15 17:08:27.025952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.366 [2024-07-15 17:08:27.025958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.366 [2024-07-15 17:08:27.028735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.626 [2024-07-15 17:08:27.038188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.626 [2024-07-15 17:08:27.038646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.626 [2024-07-15 17:08:27.038661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.626 [2024-07-15 17:08:27.038668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.626 [2024-07-15 17:08:27.038839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.626 [2024-07-15 17:08:27.039011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.626 [2024-07-15 17:08:27.039019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.626 [2024-07-15 17:08:27.039025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.626 [2024-07-15 17:08:27.041756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.626 [2024-07-15 17:08:27.051051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.626 [2024-07-15 17:08:27.051508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.626 [2024-07-15 17:08:27.051550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.626 [2024-07-15 17:08:27.051572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.626 [2024-07-15 17:08:27.051981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.626 [2024-07-15 17:08:27.052153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.626 [2024-07-15 17:08:27.052161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.626 [2024-07-15 17:08:27.052172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.626 [2024-07-15 17:08:27.054863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.626 [2024-07-15 17:08:27.063904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.626 [2024-07-15 17:08:27.064350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.626 [2024-07-15 17:08:27.064366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.626 [2024-07-15 17:08:27.064373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.626 [2024-07-15 17:08:27.064545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.626 [2024-07-15 17:08:27.064717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.626 [2024-07-15 17:08:27.064724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.626 [2024-07-15 17:08:27.064730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.626 [2024-07-15 17:08:27.067423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.626 [2024-07-15 17:08:27.076869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.626 [2024-07-15 17:08:27.077317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.626 [2024-07-15 17:08:27.077350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.626 [2024-07-15 17:08:27.077373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.626 [2024-07-15 17:08:27.077958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.626 [2024-07-15 17:08:27.078120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.626 [2024-07-15 17:08:27.078128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.626 [2024-07-15 17:08:27.078134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.626 [2024-07-15 17:08:27.080831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.626 [2024-07-15 17:08:27.089685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.626 [2024-07-15 17:08:27.090120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.626 [2024-07-15 17:08:27.090155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.626 [2024-07-15 17:08:27.090178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.626 [2024-07-15 17:08:27.090723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.626 [2024-07-15 17:08:27.090897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.626 [2024-07-15 17:08:27.090905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.626 [2024-07-15 17:08:27.090910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.626 [2024-07-15 17:08:27.093597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.626 [2024-07-15 17:08:27.102546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.626 [2024-07-15 17:08:27.102950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.626 [2024-07-15 17:08:27.102969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.626 [2024-07-15 17:08:27.102976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.626 [2024-07-15 17:08:27.103138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.626 [2024-07-15 17:08:27.103306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.626 [2024-07-15 17:08:27.103315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.626 [2024-07-15 17:08:27.103320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.626 [2024-07-15 17:08:27.105946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.626 [2024-07-15 17:08:27.115452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.626 [2024-07-15 17:08:27.115887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.626 [2024-07-15 17:08:27.115929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.626 [2024-07-15 17:08:27.115949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.626 [2024-07-15 17:08:27.116544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.626 [2024-07-15 17:08:27.117090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.626 [2024-07-15 17:08:27.117098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.626 [2024-07-15 17:08:27.117104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.626 [2024-07-15 17:08:27.119786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.626 [2024-07-15 17:08:27.128307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.626 [2024-07-15 17:08:27.128730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.626 [2024-07-15 17:08:27.128746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.626 [2024-07-15 17:08:27.128752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.626 [2024-07-15 17:08:27.128923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.626 [2024-07-15 17:08:27.129094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.626 [2024-07-15 17:08:27.129102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.626 [2024-07-15 17:08:27.129108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.626 [2024-07-15 17:08:27.131797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.626 [2024-07-15 17:08:27.141218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.626 [2024-07-15 17:08:27.141694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.627 [2024-07-15 17:08:27.141709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.627 [2024-07-15 17:08:27.141716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.627 [2024-07-15 17:08:27.141888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.627 [2024-07-15 17:08:27.142063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.627 [2024-07-15 17:08:27.142070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.627 [2024-07-15 17:08:27.142077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.627 [2024-07-15 17:08:27.144770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.627 [2024-07-15 17:08:27.154081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.627 [2024-07-15 17:08:27.154547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.627 [2024-07-15 17:08:27.154562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.627 [2024-07-15 17:08:27.154569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.627 [2024-07-15 17:08:27.154742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.627 [2024-07-15 17:08:27.154914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.627 [2024-07-15 17:08:27.154922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.627 [2024-07-15 17:08:27.154928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.627 [2024-07-15 17:08:27.157770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.627 [2024-07-15 17:08:27.167128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.627 [2024-07-15 17:08:27.167604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.627 [2024-07-15 17:08:27.167620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.627 [2024-07-15 17:08:27.167627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.627 [2024-07-15 17:08:27.167803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.627 [2024-07-15 17:08:27.167979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.627 [2024-07-15 17:08:27.167987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.627 [2024-07-15 17:08:27.167993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.627 [2024-07-15 17:08:27.170819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.627 [2024-07-15 17:08:27.180172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.627 [2024-07-15 17:08:27.180537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.627 [2024-07-15 17:08:27.180553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.627 [2024-07-15 17:08:27.180560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.627 [2024-07-15 17:08:27.180732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.627 [2024-07-15 17:08:27.180902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.627 [2024-07-15 17:08:27.180910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.627 [2024-07-15 17:08:27.180916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.627 [2024-07-15 17:08:27.183609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.627 [2024-07-15 17:08:27.193033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.627 [2024-07-15 17:08:27.193451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.627 [2024-07-15 17:08:27.193467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.627 [2024-07-15 17:08:27.193474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.627 [2024-07-15 17:08:27.193645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.627 [2024-07-15 17:08:27.193817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.627 [2024-07-15 17:08:27.193825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.627 [2024-07-15 17:08:27.193831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.627 [2024-07-15 17:08:27.196522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.627 [2024-07-15 17:08:27.205948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.627 [2024-07-15 17:08:27.206275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.627 [2024-07-15 17:08:27.206290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.627 [2024-07-15 17:08:27.206298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.627 [2024-07-15 17:08:27.206461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.627 [2024-07-15 17:08:27.206623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.627 [2024-07-15 17:08:27.206631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.627 [2024-07-15 17:08:27.206637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.627 [2024-07-15 17:08:27.209309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.627 [2024-07-15 17:08:27.218772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.627 [2024-07-15 17:08:27.219238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.627 [2024-07-15 17:08:27.219254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.627 [2024-07-15 17:08:27.219260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.627 [2024-07-15 17:08:27.219433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.627 [2024-07-15 17:08:27.219605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.627 [2024-07-15 17:08:27.219613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.627 [2024-07-15 17:08:27.219618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.627 [2024-07-15 17:08:27.222239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.627 [2024-07-15 17:08:27.231676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.627 [2024-07-15 17:08:27.232146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.627 [2024-07-15 17:08:27.232187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.627 [2024-07-15 17:08:27.232216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.627 [2024-07-15 17:08:27.232700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.627 [2024-07-15 17:08:27.232873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.627 [2024-07-15 17:08:27.232881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.627 [2024-07-15 17:08:27.232887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.627 [2024-07-15 17:08:27.235573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.627 [2024-07-15 17:08:27.244674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.627 [2024-07-15 17:08:27.245054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.627 [2024-07-15 17:08:27.245069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.627 [2024-07-15 17:08:27.245076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.627 [2024-07-15 17:08:27.245254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.627 [2024-07-15 17:08:27.245427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.627 [2024-07-15 17:08:27.245435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.627 [2024-07-15 17:08:27.245441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.627 [2024-07-15 17:08:27.248124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.627 [2024-07-15 17:08:27.257733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.627 [2024-07-15 17:08:27.258178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.627 [2024-07-15 17:08:27.258220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.627 [2024-07-15 17:08:27.258254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.627 [2024-07-15 17:08:27.258661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.627 [2024-07-15 17:08:27.258833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.627 [2024-07-15 17:08:27.258841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.627 [2024-07-15 17:08:27.258847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.627 [2024-07-15 17:08:27.261604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.627 [2024-07-15 17:08:27.270635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.627 [2024-07-15 17:08:27.271096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.627 [2024-07-15 17:08:27.271138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.627 [2024-07-15 17:08:27.271159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.627 [2024-07-15 17:08:27.271538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.627 [2024-07-15 17:08:27.271711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.627 [2024-07-15 17:08:27.271721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.627 [2024-07-15 17:08:27.271727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.628 [2024-07-15 17:08:27.274411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.628 [2024-07-15 17:08:27.283497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.628 [2024-07-15 17:08:27.283944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.628 [2024-07-15 17:08:27.283960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.628 [2024-07-15 17:08:27.283966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.628 [2024-07-15 17:08:27.284138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.628 [2024-07-15 17:08:27.284316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.628 [2024-07-15 17:08:27.284324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.628 [2024-07-15 17:08:27.284330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.628 [2024-07-15 17:08:27.287008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.888 [2024-07-15 17:08:27.296563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.888 [2024-07-15 17:08:27.297022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.888 [2024-07-15 17:08:27.297065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.888 [2024-07-15 17:08:27.297086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.888 [2024-07-15 17:08:27.297624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.888 [2024-07-15 17:08:27.297879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.888 [2024-07-15 17:08:27.297890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.888 [2024-07-15 17:08:27.297898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.888 [2024-07-15 17:08:27.301974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.888 [2024-07-15 17:08:27.309820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.888 [2024-07-15 17:08:27.310251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.888 [2024-07-15 17:08:27.310267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.888 [2024-07-15 17:08:27.310274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.888 [2024-07-15 17:08:27.310441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.888 [2024-07-15 17:08:27.310607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.888 [2024-07-15 17:08:27.310615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.888 [2024-07-15 17:08:27.310621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.888 [2024-07-15 17:08:27.313343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.888 [2024-07-15 17:08:27.322842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.888 [2024-07-15 17:08:27.323234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.888 [2024-07-15 17:08:27.323249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.888 [2024-07-15 17:08:27.323256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.888 [2024-07-15 17:08:27.323428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.888 [2024-07-15 17:08:27.323599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.888 [2024-07-15 17:08:27.323606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.888 [2024-07-15 17:08:27.323613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.888 [2024-07-15 17:08:27.326371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.888 [2024-07-15 17:08:27.335674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.888 [2024-07-15 17:08:27.336063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.888 [2024-07-15 17:08:27.336078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:20.888 [2024-07-15 17:08:27.336085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:20.888 [2024-07-15 17:08:27.336262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:20.888 [2024-07-15 17:08:27.336434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.888 [2024-07-15 17:08:27.336442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.888 [2024-07-15 17:08:27.336448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.888 [2024-07-15 17:08:27.339128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.888 [2024-07-15 17:08:27.348601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.888 [2024-07-15 17:08:27.349016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.888 [2024-07-15 17:08:27.349032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.888 [2024-07-15 17:08:27.349039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.888 [2024-07-15 17:08:27.349211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.888 [2024-07-15 17:08:27.349390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.888 [2024-07-15 17:08:27.349398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.888 [2024-07-15 17:08:27.349404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.888 [2024-07-15 17:08:27.352092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.888 [2024-07-15 17:08:27.361520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.888 [2024-07-15 17:08:27.362005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.888 [2024-07-15 17:08:27.362045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.888 [2024-07-15 17:08:27.362066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.888 [2024-07-15 17:08:27.362593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.888 [2024-07-15 17:08:27.362766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.888 [2024-07-15 17:08:27.362774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.888 [2024-07-15 17:08:27.362780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.888 [2024-07-15 17:08:27.365471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.888 [2024-07-15 17:08:27.374327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.888 [2024-07-15 17:08:27.374789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.888 [2024-07-15 17:08:27.374830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.888 [2024-07-15 17:08:27.374851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.888 [2024-07-15 17:08:27.375445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.888 [2024-07-15 17:08:27.375893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.888 [2024-07-15 17:08:27.375917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.888 [2024-07-15 17:08:27.375923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.888 [2024-07-15 17:08:27.378642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.888 [2024-07-15 17:08:27.387190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.888 [2024-07-15 17:08:27.387662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.888 [2024-07-15 17:08:27.387704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.888 [2024-07-15 17:08:27.387725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.888 [2024-07-15 17:08:27.388320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.888 [2024-07-15 17:08:27.388869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.888 [2024-07-15 17:08:27.388879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.888 [2024-07-15 17:08:27.388888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.888 [2024-07-15 17:08:27.392956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.888 [2024-07-15 17:08:27.400718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.888 [2024-07-15 17:08:27.401179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.888 [2024-07-15 17:08:27.401194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.888 [2024-07-15 17:08:27.401201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.888 [2024-07-15 17:08:27.401393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.888 [2024-07-15 17:08:27.401565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.888 [2024-07-15 17:08:27.401573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.888 [2024-07-15 17:08:27.401582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.888 [2024-07-15 17:08:27.404303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.888 [2024-07-15 17:08:27.413859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.888 [2024-07-15 17:08:27.414332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.888 [2024-07-15 17:08:27.414348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.888 [2024-07-15 17:08:27.414354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.888 [2024-07-15 17:08:27.414526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.888 [2024-07-15 17:08:27.414697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.888 [2024-07-15 17:08:27.414705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.888 [2024-07-15 17:08:27.414711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.888 [2024-07-15 17:08:27.417461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.889 [2024-07-15 17:08:27.426957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.889 [2024-07-15 17:08:27.427359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.889 [2024-07-15 17:08:27.427386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.889 [2024-07-15 17:08:27.427393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.889 [2024-07-15 17:08:27.427556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.889 [2024-07-15 17:08:27.427718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.889 [2024-07-15 17:08:27.427725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.889 [2024-07-15 17:08:27.427731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.889 [2024-07-15 17:08:27.430408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.889 [2024-07-15 17:08:27.439818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.889 [2024-07-15 17:08:27.440269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.889 [2024-07-15 17:08:27.440285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.889 [2024-07-15 17:08:27.440292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.889 [2024-07-15 17:08:27.440471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.889 [2024-07-15 17:08:27.440633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.889 [2024-07-15 17:08:27.440640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.889 [2024-07-15 17:08:27.440646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.889 [2024-07-15 17:08:27.443386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.889 [2024-07-15 17:08:27.452610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.889 [2024-07-15 17:08:27.453065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.889 [2024-07-15 17:08:27.453113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.889 [2024-07-15 17:08:27.453136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.889 [2024-07-15 17:08:27.453732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.889 [2024-07-15 17:08:27.454120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.889 [2024-07-15 17:08:27.454127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.889 [2024-07-15 17:08:27.454133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.889 [2024-07-15 17:08:27.456822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.889 [2024-07-15 17:08:27.465523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.889 [2024-07-15 17:08:27.465871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.889 [2024-07-15 17:08:27.465886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.889 [2024-07-15 17:08:27.465892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.889 [2024-07-15 17:08:27.466055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.889 [2024-07-15 17:08:27.466217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.889 [2024-07-15 17:08:27.466230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.889 [2024-07-15 17:08:27.466236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.889 [2024-07-15 17:08:27.468939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.889 [2024-07-15 17:08:27.478517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.889 [2024-07-15 17:08:27.478967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.889 [2024-07-15 17:08:27.478981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.889 [2024-07-15 17:08:27.478987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.889 [2024-07-15 17:08:27.479149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.889 [2024-07-15 17:08:27.479335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.889 [2024-07-15 17:08:27.479343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.889 [2024-07-15 17:08:27.479350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.889 [2024-07-15 17:08:27.482069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.889 [2024-07-15 17:08:27.491376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.889 [2024-07-15 17:08:27.491856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.889 [2024-07-15 17:08:27.491871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.889 [2024-07-15 17:08:27.491878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.889 [2024-07-15 17:08:27.492050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.889 [2024-07-15 17:08:27.492236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.889 [2024-07-15 17:08:27.492244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.889 [2024-07-15 17:08:27.492250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.889 [2024-07-15 17:08:27.494936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.889 [2024-07-15 17:08:27.504296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.889 [2024-07-15 17:08:27.504769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.889 [2024-07-15 17:08:27.504810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.889 [2024-07-15 17:08:27.504833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.889 [2024-07-15 17:08:27.505427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.889 [2024-07-15 17:08:27.506009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.889 [2024-07-15 17:08:27.506032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.889 [2024-07-15 17:08:27.506052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.889 [2024-07-15 17:08:27.508867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.889 [2024-07-15 17:08:27.517189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.889 [2024-07-15 17:08:27.517587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.889 [2024-07-15 17:08:27.517629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.889 [2024-07-15 17:08:27.517651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.889 [2024-07-15 17:08:27.518166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.889 [2024-07-15 17:08:27.518355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.889 [2024-07-15 17:08:27.518364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.889 [2024-07-15 17:08:27.518370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.889 [2024-07-15 17:08:27.521054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.889 [2024-07-15 17:08:27.530121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.889 [2024-07-15 17:08:27.530617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.889 [2024-07-15 17:08:27.530658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.889 [2024-07-15 17:08:27.530679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.889 [2024-07-15 17:08:27.531238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.889 [2024-07-15 17:08:27.531410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.889 [2024-07-15 17:08:27.531418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.889 [2024-07-15 17:08:27.531425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.889 [2024-07-15 17:08:27.534109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.889 [2024-07-15 17:08:27.542953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.889 [2024-07-15 17:08:27.543399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.889 [2024-07-15 17:08:27.543414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:20.889 [2024-07-15 17:08:27.543421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:20.889 [2024-07-15 17:08:27.543584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:20.890 [2024-07-15 17:08:27.543746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.890 [2024-07-15 17:08:27.543753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.890 [2024-07-15 17:08:27.543759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.890 [2024-07-15 17:08:27.546437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.150 [2024-07-15 17:08:27.556056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.150 [2024-07-15 17:08:27.556503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.150 [2024-07-15 17:08:27.556519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.150 [2024-07-15 17:08:27.556526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.150 [2024-07-15 17:08:27.556702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.150 [2024-07-15 17:08:27.556879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.150 [2024-07-15 17:08:27.556886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.150 [2024-07-15 17:08:27.556893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.150 [2024-07-15 17:08:27.559640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.150 [2024-07-15 17:08:27.568963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.150 [2024-07-15 17:08:27.569431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.150 [2024-07-15 17:08:27.569446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.150 [2024-07-15 17:08:27.569452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.150 [2024-07-15 17:08:27.569614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.150 [2024-07-15 17:08:27.569775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.150 [2024-07-15 17:08:27.569783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.150 [2024-07-15 17:08:27.569788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.150 [2024-07-15 17:08:27.572459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.150 [2024-07-15 17:08:27.581879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.150 [2024-07-15 17:08:27.582309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.150 [2024-07-15 17:08:27.582352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.150 [2024-07-15 17:08:27.582381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.150 [2024-07-15 17:08:27.582963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.150 [2024-07-15 17:08:27.583207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.150 [2024-07-15 17:08:27.583215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.150 [2024-07-15 17:08:27.583221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.150 [2024-07-15 17:08:27.585914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.150 [2024-07-15 17:08:27.594714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.150 [2024-07-15 17:08:27.595161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.150 [2024-07-15 17:08:27.595177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.150 [2024-07-15 17:08:27.595183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.150 [2024-07-15 17:08:27.595361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.150 [2024-07-15 17:08:27.595534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.150 [2024-07-15 17:08:27.595542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.150 [2024-07-15 17:08:27.595549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.150 [2024-07-15 17:08:27.598247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.150 [2024-07-15 17:08:27.607628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.150 [2024-07-15 17:08:27.608026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.150 [2024-07-15 17:08:27.608042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.150 [2024-07-15 17:08:27.608048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.150 [2024-07-15 17:08:27.608220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.150 [2024-07-15 17:08:27.608398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.150 [2024-07-15 17:08:27.608406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.150 [2024-07-15 17:08:27.608412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.150 [2024-07-15 17:08:27.611099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.150 [2024-07-15 17:08:27.620646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.150 [2024-07-15 17:08:27.621022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.150 [2024-07-15 17:08:27.621038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.150 [2024-07-15 17:08:27.621045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.150 [2024-07-15 17:08:27.621222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.150 [2024-07-15 17:08:27.621405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.150 [2024-07-15 17:08:27.621416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.150 [2024-07-15 17:08:27.621423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.150 [2024-07-15 17:08:27.624269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.150 [2024-07-15 17:08:27.633457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.150 [2024-07-15 17:08:27.634399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.150 [2024-07-15 17:08:27.634420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.150 [2024-07-15 17:08:27.634428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.150 [2024-07-15 17:08:27.634606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.150 [2024-07-15 17:08:27.634779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.150 [2024-07-15 17:08:27.634787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.150 [2024-07-15 17:08:27.634794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.150 [2024-07-15 17:08:27.637489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.151 [2024-07-15 17:08:27.646376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.151 [2024-07-15 17:08:27.646702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-15 17:08:27.646718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.151 [2024-07-15 17:08:27.646725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.151 [2024-07-15 17:08:27.646898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.151 [2024-07-15 17:08:27.647070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.151 [2024-07-15 17:08:27.647078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.151 [2024-07-15 17:08:27.647084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.151 [2024-07-15 17:08:27.649779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.151 [2024-07-15 17:08:27.659346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.151 [2024-07-15 17:08:27.659764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-15 17:08:27.659781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.151 [2024-07-15 17:08:27.659788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.151 [2024-07-15 17:08:27.659960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.151 [2024-07-15 17:08:27.660132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.151 [2024-07-15 17:08:27.660139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.151 [2024-07-15 17:08:27.660145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.151 [2024-07-15 17:08:27.662982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.151 [2024-07-15 17:08:27.672455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.151 [2024-07-15 17:08:27.672895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.151 [2024-07-15 17:08:27.672930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.151 [2024-07-15 17:08:27.672952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.151 [2024-07-15 17:08:27.673537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.151 [2024-07-15 17:08:27.673711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.151 [2024-07-15 17:08:27.673718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.151 [2024-07-15 17:08:27.673725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.151 [2024-07-15 17:08:27.676499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.151 [2024-07-15 17:08:27.685425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.151 [2024-07-15 17:08:27.685750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.151 [2024-07-15 17:08:27.685767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.151 [2024-07-15 17:08:27.685774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.151 [2024-07-15 17:08:27.685947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.151 [2024-07-15 17:08:27.686118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.151 [2024-07-15 17:08:27.686126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.151 [2024-07-15 17:08:27.686132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.151 [2024-07-15 17:08:27.688824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.151 [2024-07-15 17:08:27.698315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.151 [2024-07-15 17:08:27.698785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.151 [2024-07-15 17:08:27.698827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.151 [2024-07-15 17:08:27.698849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.151 [2024-07-15 17:08:27.699444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.151 [2024-07-15 17:08:27.699812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.151 [2024-07-15 17:08:27.699820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.151 [2024-07-15 17:08:27.699826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.151 [2024-07-15 17:08:27.702513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.151 [2024-07-15 17:08:27.711182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.151 [2024-07-15 17:08:27.711560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.151 [2024-07-15 17:08:27.711577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.151 [2024-07-15 17:08:27.711583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.151 [2024-07-15 17:08:27.711759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.151 [2024-07-15 17:08:27.711930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.151 [2024-07-15 17:08:27.711938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.151 [2024-07-15 17:08:27.711944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.151 [2024-07-15 17:08:27.714635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.151 [2024-07-15 17:08:27.724042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.151 [2024-07-15 17:08:27.724360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.151 [2024-07-15 17:08:27.724376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.151 [2024-07-15 17:08:27.724382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.151 [2024-07-15 17:08:27.724553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.151 [2024-07-15 17:08:27.724724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.151 [2024-07-15 17:08:27.724732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.151 [2024-07-15 17:08:27.724738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.151 [2024-07-15 17:08:27.727459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.151 [2024-07-15 17:08:27.736922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.151 [2024-07-15 17:08:27.737312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.151 [2024-07-15 17:08:27.737354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.151 [2024-07-15 17:08:27.737375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.151 [2024-07-15 17:08:27.737870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.151 [2024-07-15 17:08:27.738042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.151 [2024-07-15 17:08:27.738050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.151 [2024-07-15 17:08:27.738056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.151 [2024-07-15 17:08:27.740750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.151 [2024-07-15 17:08:27.749742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.152 [2024-07-15 17:08:27.750064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.152 [2024-07-15 17:08:27.750079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.152 [2024-07-15 17:08:27.750086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.152 [2024-07-15 17:08:27.750263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.152 [2024-07-15 17:08:27.750435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.152 [2024-07-15 17:08:27.750443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.152 [2024-07-15 17:08:27.750452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.152 [2024-07-15 17:08:27.753142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.152 [2024-07-15 17:08:27.762613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.152 [2024-07-15 17:08:27.763030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.152 [2024-07-15 17:08:27.763046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.152 [2024-07-15 17:08:27.763052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.152 [2024-07-15 17:08:27.763229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.152 [2024-07-15 17:08:27.763401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.152 [2024-07-15 17:08:27.763408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.152 [2024-07-15 17:08:27.763414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.152 [2024-07-15 17:08:27.766098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.152 [2024-07-15 17:08:27.775418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.152 [2024-07-15 17:08:27.775861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.152 [2024-07-15 17:08:27.775907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.152 [2024-07-15 17:08:27.775928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.152 [2024-07-15 17:08:27.776525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.152 [2024-07-15 17:08:27.776715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.152 [2024-07-15 17:08:27.776723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.152 [2024-07-15 17:08:27.776729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.152 [2024-07-15 17:08:27.779453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.152 [2024-07-15 17:08:27.788310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.152 [2024-07-15 17:08:27.788710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.152 [2024-07-15 17:08:27.788762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.152 [2024-07-15 17:08:27.788784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.152 [2024-07-15 17:08:27.789377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.152 [2024-07-15 17:08:27.789631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.152 [2024-07-15 17:08:27.789639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.152 [2024-07-15 17:08:27.789645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.152 [2024-07-15 17:08:27.792336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.152 [2024-07-15 17:08:27.801210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.152 [2024-07-15 17:08:27.801692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.152 [2024-07-15 17:08:27.801732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.152 [2024-07-15 17:08:27.801755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.152 [2024-07-15 17:08:27.802346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.152 [2024-07-15 17:08:27.802852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.152 [2024-07-15 17:08:27.802864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.152 [2024-07-15 17:08:27.802873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.152 [2024-07-15 17:08:27.806947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.152 [2024-07-15 17:08:27.814788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.152 [2024-07-15 17:08:27.815194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.152 [2024-07-15 17:08:27.815210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.152 [2024-07-15 17:08:27.815217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.152 [2024-07-15 17:08:27.815399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.152 [2024-07-15 17:08:27.815576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.152 [2024-07-15 17:08:27.815584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.152 [2024-07-15 17:08:27.815590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.412 [2024-07-15 17:08:27.818475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.412 [2024-07-15 17:08:27.827867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.412 [2024-07-15 17:08:27.828248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.412 [2024-07-15 17:08:27.828264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.412 [2024-07-15 17:08:27.828270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.412 [2024-07-15 17:08:27.828443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.412 [2024-07-15 17:08:27.828615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.412 [2024-07-15 17:08:27.828623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.412 [2024-07-15 17:08:27.828629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.412 [2024-07-15 17:08:27.831387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.412 [2024-07-15 17:08:27.840874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.412 [2024-07-15 17:08:27.841245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.412 [2024-07-15 17:08:27.841260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.412 [2024-07-15 17:08:27.841267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.412 [2024-07-15 17:08:27.841439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.412 [2024-07-15 17:08:27.841615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.412 [2024-07-15 17:08:27.841623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.412 [2024-07-15 17:08:27.841629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.412 [2024-07-15 17:08:27.844389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.412 [2024-07-15 17:08:27.853978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.412 [2024-07-15 17:08:27.854344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.412 [2024-07-15 17:08:27.854388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.412 [2024-07-15 17:08:27.854410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.412 [2024-07-15 17:08:27.854899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.412 [2024-07-15 17:08:27.855070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.412 [2024-07-15 17:08:27.855079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.412 [2024-07-15 17:08:27.855085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.412 [2024-07-15 17:08:27.857764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.412 [2024-07-15 17:08:27.866919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.412 [2024-07-15 17:08:27.867254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.412 [2024-07-15 17:08:27.867270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.412 [2024-07-15 17:08:27.867277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.412 [2024-07-15 17:08:27.867448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.412 [2024-07-15 17:08:27.867620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.412 [2024-07-15 17:08:27.867629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.412 [2024-07-15 17:08:27.867635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.412 [2024-07-15 17:08:27.870345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.412 [2024-07-15 17:08:27.879984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.412 [2024-07-15 17:08:27.880297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.412 [2024-07-15 17:08:27.880313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.412 [2024-07-15 17:08:27.880320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.412 [2024-07-15 17:08:27.880491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.412 [2024-07-15 17:08:27.880663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.412 [2024-07-15 17:08:27.880671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:27.880677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:27.883435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:27.892959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:27.893364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:27.893380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:27.893387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:27.893563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:27.893726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:27.893733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:27.893739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:27.896418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:27.905890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:27.906222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:27.906241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:27.906248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:27.906419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:27.906590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:27.906598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:27.906604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:27.909295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:27.918756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:27.919077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:27.919093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:27.919099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:27.919277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:27.919449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:27.919457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:27.919463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:27.922292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:27.931827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:27.932245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:27.932272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:27.932282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:27.932460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:27.932638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:27.932646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:27.932652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:27.935490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:27.945024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:27.945513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:27.945529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:27.945536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:27.945712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:27.945888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:27.945896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:27.945902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:27.948742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:27.958229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:27.958608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:27.958624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:27.958631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:27.958813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:27.958997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:27.959005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:27.959011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:27.961953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:27.971743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:27.972219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:27.972241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:27.972248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:27.972452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:27.972636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:27.972647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:27.972654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:27.975574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:27.984791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:27.985268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:27.985310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:27.985332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:27.985910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:27.986421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:27.986429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:27.986435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:27.989272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:27.997967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:27.998460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:27.998503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:27.998525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:27.999105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:27.999326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:27.999335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:27.999341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:28.002083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:28.010786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:28.011129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:28.011144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:28.011150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:28.011339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:28.011511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:28.011519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:28.011525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:28.014209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:28.023616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:28.024079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:28.024121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:28.024143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:28.024738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:28.025333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:28.025357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:28.025377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:28.029490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:28.037183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:28.037644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:28.037660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:28.037667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.413 [2024-07-15 17:08:28.037839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.413 [2024-07-15 17:08:28.038010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.413 [2024-07-15 17:08:28.038018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.413 [2024-07-15 17:08:28.038024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.413 [2024-07-15 17:08:28.040747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.413 [2024-07-15 17:08:28.049988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.413 [2024-07-15 17:08:28.050471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.413 [2024-07-15 17:08:28.050513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.413 [2024-07-15 17:08:28.050534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.414 [2024-07-15 17:08:28.051113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.414 [2024-07-15 17:08:28.051581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.414 [2024-07-15 17:08:28.051590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.414 [2024-07-15 17:08:28.051596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.414 [2024-07-15 17:08:28.054278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.414 [2024-07-15 17:08:28.062908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.414 [2024-07-15 17:08:28.063249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.414 [2024-07-15 17:08:28.063264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.414 [2024-07-15 17:08:28.063270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.414 [2024-07-15 17:08:28.063437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.414 [2024-07-15 17:08:28.063599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.414 [2024-07-15 17:08:28.063606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.414 [2024-07-15 17:08:28.063612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.414 [2024-07-15 17:08:28.066243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.414 [2024-07-15 17:08:28.075857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.414 [2024-07-15 17:08:28.076305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.414 [2024-07-15 17:08:28.076321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.414 [2024-07-15 17:08:28.076328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.414 [2024-07-15 17:08:28.076506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.414 [2024-07-15 17:08:28.076687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.414 [2024-07-15 17:08:28.076694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.414 [2024-07-15 17:08:28.076702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.414 [2024-07-15 17:08:28.079581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.674 [2024-07-15 17:08:28.088819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.674 [2024-07-15 17:08:28.089287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.674 [2024-07-15 17:08:28.089303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.674 [2024-07-15 17:08:28.089309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.674 [2024-07-15 17:08:28.089491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.674 [2024-07-15 17:08:28.089654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.674 [2024-07-15 17:08:28.089661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.674 [2024-07-15 17:08:28.089667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.674 [2024-07-15 17:08:28.092342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.674 [2024-07-15 17:08:28.101635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.674 [2024-07-15 17:08:28.101980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.674 [2024-07-15 17:08:28.101994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.674 [2024-07-15 17:08:28.102001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.674 [2024-07-15 17:08:28.102163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.674 [2024-07-15 17:08:28.102352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.674 [2024-07-15 17:08:28.102360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.674 [2024-07-15 17:08:28.102369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.674 [2024-07-15 17:08:28.105051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.674 [2024-07-15 17:08:28.114500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.674 [2024-07-15 17:08:28.114901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.674 [2024-07-15 17:08:28.114916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.674 [2024-07-15 17:08:28.114922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.674 [2024-07-15 17:08:28.115084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.674 [2024-07-15 17:08:28.115269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.674 [2024-07-15 17:08:28.115278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.674 [2024-07-15 17:08:28.115284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.674 [2024-07-15 17:08:28.117963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.674 [2024-07-15 17:08:28.127355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.674 [2024-07-15 17:08:28.127756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.674 [2024-07-15 17:08:28.127772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.674 [2024-07-15 17:08:28.127779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.674 [2024-07-15 17:08:28.127951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.674 [2024-07-15 17:08:28.128122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.674 [2024-07-15 17:08:28.128130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.674 [2024-07-15 17:08:28.128136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.674 [2024-07-15 17:08:28.130827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.674 [2024-07-15 17:08:28.140234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.674 [2024-07-15 17:08:28.140686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.674 [2024-07-15 17:08:28.140700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.674 [2024-07-15 17:08:28.140707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.674 [2024-07-15 17:08:28.140869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.674 [2024-07-15 17:08:28.141031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.674 [2024-07-15 17:08:28.141038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.674 [2024-07-15 17:08:28.141044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.674 [2024-07-15 17:08:28.143737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.674 [2024-07-15 17:08:28.153033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.674 [2024-07-15 17:08:28.153485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.674 [2024-07-15 17:08:28.153500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.674 [2024-07-15 17:08:28.153507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.674 [2024-07-15 17:08:28.153678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.674 [2024-07-15 17:08:28.153850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.674 [2024-07-15 17:08:28.153858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.674 [2024-07-15 17:08:28.153864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.674 [2024-07-15 17:08:28.156551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.674 [2024-07-15 17:08:28.165905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.674 [2024-07-15 17:08:28.166352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.674 [2024-07-15 17:08:28.166367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.674 [2024-07-15 17:08:28.166373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.674 [2024-07-15 17:08:28.166536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.674 [2024-07-15 17:08:28.166698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.674 [2024-07-15 17:08:28.166706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.674 [2024-07-15 17:08:28.166711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.674 [2024-07-15 17:08:28.169389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.674 [2024-07-15 17:08:28.179109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.674 [2024-07-15 17:08:28.179557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.674 [2024-07-15 17:08:28.179573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.674 [2024-07-15 17:08:28.179580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.674 [2024-07-15 17:08:28.179752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.674 [2024-07-15 17:08:28.179924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.674 [2024-07-15 17:08:28.179932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.674 [2024-07-15 17:08:28.179938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.674 [2024-07-15 17:08:28.182686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.674 [2024-07-15 17:08:28.191992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.674 [2024-07-15 17:08:28.192445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.674 [2024-07-15 17:08:28.192461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.674 [2024-07-15 17:08:28.192468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.674 [2024-07-15 17:08:28.192641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.675 [2024-07-15 17:08:28.192815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.675 [2024-07-15 17:08:28.192822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.675 [2024-07-15 17:08:28.192828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.675 [2024-07-15 17:08:28.195577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.675 [2024-07-15 17:08:28.204848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.675 [2024-07-15 17:08:28.205306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.675 [2024-07-15 17:08:28.205349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.675 [2024-07-15 17:08:28.205370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.675 [2024-07-15 17:08:28.205610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.675 [2024-07-15 17:08:28.205773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.675 [2024-07-15 17:08:28.205780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.675 [2024-07-15 17:08:28.205786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.675 [2024-07-15 17:08:28.208461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.675 [2024-07-15 17:08:28.217775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.675 [2024-07-15 17:08:28.218131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.675 [2024-07-15 17:08:28.218146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.675 [2024-07-15 17:08:28.218153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.675 [2024-07-15 17:08:28.218330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.675 [2024-07-15 17:08:28.218502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.675 [2024-07-15 17:08:28.218510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.675 [2024-07-15 17:08:28.218516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.675 [2024-07-15 17:08:28.221243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.675 [2024-07-15 17:08:28.230643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.675 [2024-07-15 17:08:28.231069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.675 [2024-07-15 17:08:28.231083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.675 [2024-07-15 17:08:28.231090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.675 [2024-07-15 17:08:28.231274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.675 [2024-07-15 17:08:28.231446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.675 [2024-07-15 17:08:28.231454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.675 [2024-07-15 17:08:28.231460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.675 [2024-07-15 17:08:28.234145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.675 [2024-07-15 17:08:28.243495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.675 [2024-07-15 17:08:28.243832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.675 [2024-07-15 17:08:28.243846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.675 [2024-07-15 17:08:28.243852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.675 [2024-07-15 17:08:28.244013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.675 [2024-07-15 17:08:28.244175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.675 [2024-07-15 17:08:28.244183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.675 [2024-07-15 17:08:28.244188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.675 [2024-07-15 17:08:28.246890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.675 [2024-07-15 17:08:28.256352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.675 [2024-07-15 17:08:28.256793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.675 [2024-07-15 17:08:28.256808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.675 [2024-07-15 17:08:28.256815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.675 [2024-07-15 17:08:28.256987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.675 [2024-07-15 17:08:28.257157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.675 [2024-07-15 17:08:28.257165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.675 [2024-07-15 17:08:28.257171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.675 [2024-07-15 17:08:28.259859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.675 [2024-07-15 17:08:28.269161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.675 [2024-07-15 17:08:28.269617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.675 [2024-07-15 17:08:28.269659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.675 [2024-07-15 17:08:28.269681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.675 [2024-07-15 17:08:28.270124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.675 [2024-07-15 17:08:28.270300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.675 [2024-07-15 17:08:28.270309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.675 [2024-07-15 17:08:28.270315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.675 [2024-07-15 17:08:28.272994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.675 [2024-07-15 17:08:28.281977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.675 [2024-07-15 17:08:28.282448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.675 [2024-07-15 17:08:28.282464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.675 [2024-07-15 17:08:28.282476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.675 [2024-07-15 17:08:28.282648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.675 [2024-07-15 17:08:28.282819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.675 [2024-07-15 17:08:28.282828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.675 [2024-07-15 17:08:28.282834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.675 [2024-07-15 17:08:28.285523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.675 [2024-07-15 17:08:28.294840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.675 [2024-07-15 17:08:28.295293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.675 [2024-07-15 17:08:28.295309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.675 [2024-07-15 17:08:28.295315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.675 [2024-07-15 17:08:28.295487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.675 [2024-07-15 17:08:28.295658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.675 [2024-07-15 17:08:28.295666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.675 [2024-07-15 17:08:28.295672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.675 [2024-07-15 17:08:28.298364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.675 [2024-07-15 17:08:28.307706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.675 [2024-07-15 17:08:28.308190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.675 [2024-07-15 17:08:28.308244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:21.675 [2024-07-15 17:08:28.308267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:21.675 [2024-07-15 17:08:28.308750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:21.675 [2024-07-15 17:08:28.308922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.675 [2024-07-15 17:08:28.308929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.675 [2024-07-15 17:08:28.308935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.675 [2024-07-15 17:08:28.311719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.675 [2024-07-15 17:08:28.320576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.675 [2024-07-15 17:08:28.321035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.675 [2024-07-15 17:08:28.321077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.675 [2024-07-15 17:08:28.321098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.675 [2024-07-15 17:08:28.321688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.675 [2024-07-15 17:08:28.322195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.675 [2024-07-15 17:08:28.322206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.675 [2024-07-15 17:08:28.322212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.675 [2024-07-15 17:08:28.324942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.675 [2024-07-15 17:08:28.333440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.675 [2024-07-15 17:08:28.333896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.675 [2024-07-15 17:08:28.333936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.675 [2024-07-15 17:08:28.333958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.676 [2024-07-15 17:08:28.334554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.676 [2024-07-15 17:08:28.335049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.676 [2024-07-15 17:08:28.335057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.676 [2024-07-15 17:08:28.335063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.676 [2024-07-15 17:08:28.337881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.936 [2024-07-15 17:08:28.346515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.936 [2024-07-15 17:08:28.346974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.936 [2024-07-15 17:08:28.346989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.936 [2024-07-15 17:08:28.346995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.936 [2024-07-15 17:08:28.347172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.936 [2024-07-15 17:08:28.347356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.936 [2024-07-15 17:08:28.347364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.936 [2024-07-15 17:08:28.347370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.936 [2024-07-15 17:08:28.350086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.936 [2024-07-15 17:08:28.359394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.936 [2024-07-15 17:08:28.359840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.936 [2024-07-15 17:08:28.359855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.936 [2024-07-15 17:08:28.359861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.936 [2024-07-15 17:08:28.360023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.936 [2024-07-15 17:08:28.360185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.936 [2024-07-15 17:08:28.360192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.936 [2024-07-15 17:08:28.360198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.936 [2024-07-15 17:08:28.362900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.936 [2024-07-15 17:08:28.372206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.936 [2024-07-15 17:08:28.372684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.936 [2024-07-15 17:08:28.372699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.936 [2024-07-15 17:08:28.372706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.936 [2024-07-15 17:08:28.372878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.936 [2024-07-15 17:08:28.373049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.936 [2024-07-15 17:08:28.373057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.936 [2024-07-15 17:08:28.373063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.936 [2024-07-15 17:08:28.375753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.936 [2024-07-15 17:08:28.385050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.936 [2024-07-15 17:08:28.385502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.936 [2024-07-15 17:08:28.385517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.936 [2024-07-15 17:08:28.385524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.936 [2024-07-15 17:08:28.385695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.936 [2024-07-15 17:08:28.385867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.937 [2024-07-15 17:08:28.385874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.937 [2024-07-15 17:08:28.385880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.937 [2024-07-15 17:08:28.388572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.937 [2024-07-15 17:08:28.397883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.937 [2024-07-15 17:08:28.398358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.937 [2024-07-15 17:08:28.398400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.937 [2024-07-15 17:08:28.398421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.937 [2024-07-15 17:08:28.398970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.937 [2024-07-15 17:08:28.399133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.937 [2024-07-15 17:08:28.399141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.937 [2024-07-15 17:08:28.399146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.937 [2024-07-15 17:08:28.403013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.937 [2024-07-15 17:08:28.411526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.937 [2024-07-15 17:08:28.411979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.937 [2024-07-15 17:08:28.411994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.937 [2024-07-15 17:08:28.412000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.937 [2024-07-15 17:08:28.412170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.937 [2024-07-15 17:08:28.412360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.937 [2024-07-15 17:08:28.412369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.937 [2024-07-15 17:08:28.412375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.937 [2024-07-15 17:08:28.415096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.937 [2024-07-15 17:08:28.424354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.937 [2024-07-15 17:08:28.424789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.937 [2024-07-15 17:08:28.424804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.937 [2024-07-15 17:08:28.424811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.937 [2024-07-15 17:08:28.424983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.937 [2024-07-15 17:08:28.425155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.937 [2024-07-15 17:08:28.425162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.937 [2024-07-15 17:08:28.425168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.937 [2024-07-15 17:08:28.428009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.937 [2024-07-15 17:08:28.437374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.937 [2024-07-15 17:08:28.437808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.937 [2024-07-15 17:08:28.437823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.937 [2024-07-15 17:08:28.437829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.937 [2024-07-15 17:08:28.438000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.937 [2024-07-15 17:08:28.438171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.937 [2024-07-15 17:08:28.438179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.937 [2024-07-15 17:08:28.438185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.937 [2024-07-15 17:08:28.440940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.937 [2024-07-15 17:08:28.450347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.937 [2024-07-15 17:08:28.450795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.937 [2024-07-15 17:08:28.450810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.937 [2024-07-15 17:08:28.450817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.937 [2024-07-15 17:08:28.450988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.937 [2024-07-15 17:08:28.451160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.937 [2024-07-15 17:08:28.451168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.937 [2024-07-15 17:08:28.451177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.937 [2024-07-15 17:08:28.453931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 231755 Killed "${NVMF_APP[@]}" "$@" 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=233163 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 233163 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 233163 ']' 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:21.937 17:08:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:21.937 [2024-07-15 17:08:28.463474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.937 [2024-07-15 17:08:28.463907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.937 [2024-07-15 17:08:28.463923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.937 [2024-07-15 17:08:28.463930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.937 [2024-07-15 17:08:28.464108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.937 [2024-07-15 17:08:28.464290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.937 [2024-07-15 17:08:28.464298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.937 [2024-07-15 17:08:28.464305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.937 [2024-07-15 17:08:28.467141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.937 [2024-07-15 17:08:28.476523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.937 [2024-07-15 17:08:28.476908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.938 [2024-07-15 17:08:28.476923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.938 [2024-07-15 17:08:28.476930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.938 [2024-07-15 17:08:28.477107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.938 [2024-07-15 17:08:28.477291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.938 [2024-07-15 17:08:28.477300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.938 [2024-07-15 17:08:28.477309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.938 [2024-07-15 17:08:28.480147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.938 [2024-07-15 17:08:28.489663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.938 [2024-07-15 17:08:28.489673] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:26:21.938 [2024-07-15 17:08:28.489711] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.938 [2024-07-15 17:08:28.490122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.938 [2024-07-15 17:08:28.490137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.938 [2024-07-15 17:08:28.490143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.938 [2024-07-15 17:08:28.490324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.938 [2024-07-15 17:08:28.490497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.938 [2024-07-15 17:08:28.490504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.938 [2024-07-15 17:08:28.490511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.938 [2024-07-15 17:08:28.493262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.938 [2024-07-15 17:08:28.502784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.938 [2024-07-15 17:08:28.503140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.938 [2024-07-15 17:08:28.503155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.938 [2024-07-15 17:08:28.503162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.938 [2024-07-15 17:08:28.503343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.938 [2024-07-15 17:08:28.503515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.938 [2024-07-15 17:08:28.503523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.938 [2024-07-15 17:08:28.503529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.938 [2024-07-15 17:08:28.506285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.938 [2024-07-15 17:08:28.515872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.938 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.938 [2024-07-15 17:08:28.516243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.938 [2024-07-15 17:08:28.516259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.938 [2024-07-15 17:08:28.516265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.938 [2024-07-15 17:08:28.516438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.938 [2024-07-15 17:08:28.516609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.938 [2024-07-15 17:08:28.516616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.938 [2024-07-15 17:08:28.516622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.938 [2024-07-15 17:08:28.519394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.938 [2024-07-15 17:08:28.528924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.938 [2024-07-15 17:08:28.529383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.938 [2024-07-15 17:08:28.529399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.938 [2024-07-15 17:08:28.529407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.938 [2024-07-15 17:08:28.529584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.938 [2024-07-15 17:08:28.529761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.938 [2024-07-15 17:08:28.529769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.938 [2024-07-15 17:08:28.529776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.938 [2024-07-15 17:08:28.532674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.938 [2024-07-15 17:08:28.541993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.938 [2024-07-15 17:08:28.542434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.938 [2024-07-15 17:08:28.542451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.938 [2024-07-15 17:08:28.542458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.938 [2024-07-15 17:08:28.542629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.938 [2024-07-15 17:08:28.542800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.938 [2024-07-15 17:08:28.542808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.938 [2024-07-15 17:08:28.542814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.938 [2024-07-15 17:08:28.545563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.938 [2024-07-15 17:08:28.546131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:21.938 [2024-07-15 17:08:28.554984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.938 [2024-07-15 17:08:28.555447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.938 [2024-07-15 17:08:28.555464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.938 [2024-07-15 17:08:28.555471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.938 [2024-07-15 17:08:28.555643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.938 [2024-07-15 17:08:28.555815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.938 [2024-07-15 17:08:28.555823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.938 [2024-07-15 17:08:28.555829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.938 [2024-07-15 17:08:28.558616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.938 [2024-07-15 17:08:28.568102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.938 [2024-07-15 17:08:28.568567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.938 [2024-07-15 17:08:28.568587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.938 [2024-07-15 17:08:28.568593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.938 [2024-07-15 17:08:28.568766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.938 [2024-07-15 17:08:28.568938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.938 [2024-07-15 17:08:28.568946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.938 [2024-07-15 17:08:28.568953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.939 [2024-07-15 17:08:28.571711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.939 [2024-07-15 17:08:28.581134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.939 [2024-07-15 17:08:28.581606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.939 [2024-07-15 17:08:28.581622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.939 [2024-07-15 17:08:28.581629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.939 [2024-07-15 17:08:28.581801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.939 [2024-07-15 17:08:28.581973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.939 [2024-07-15 17:08:28.581981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.939 [2024-07-15 17:08:28.581987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.939 [2024-07-15 17:08:28.584749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.939 [2024-07-15 17:08:28.594182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.939 [2024-07-15 17:08:28.594590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.939 [2024-07-15 17:08:28.594610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:21.939 [2024-07-15 17:08:28.594618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:21.939 [2024-07-15 17:08:28.594791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:21.939 [2024-07-15 17:08:28.594965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.939 [2024-07-15 17:08:28.594973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.939 [2024-07-15 17:08:28.594980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.939 [2024-07-15 17:08:28.597736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.199 [2024-07-15 17:08:28.607242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.199 [2024-07-15 17:08:28.607717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.199 [2024-07-15 17:08:28.607733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.199 [2024-07-15 17:08:28.607740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.199 [2024-07-15 17:08:28.607917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.199 [2024-07-15 17:08:28.608100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.199 [2024-07-15 17:08:28.608109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.199 [2024-07-15 17:08:28.608115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.199 [2024-07-15 17:08:28.610960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.199 [2024-07-15 17:08:28.620251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.199 [2024-07-15 17:08:28.620705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.199 [2024-07-15 17:08:28.620721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.199 [2024-07-15 17:08:28.620728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.199 [2024-07-15 17:08:28.620905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.199 [2024-07-15 17:08:28.621090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.199 [2024-07-15 17:08:28.621098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.199 [2024-07-15 17:08:28.621104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.199 [2024-07-15 17:08:28.622349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:22.199 [2024-07-15 17:08:28.622378] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:22.199 [2024-07-15 17:08:28.622385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:22.199 [2024-07-15 17:08:28.622392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:22.199 [2024-07-15 17:08:28.622397] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:22.199 [2024-07-15 17:08:28.622437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:22.199 [2024-07-15 17:08:28.622526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:26:22.199 [2024-07-15 17:08:28.622528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:22.199 [2024-07-15 17:08:28.623923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.199 [2024-07-15 17:08:28.633310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.199 [2024-07-15 17:08:28.633722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.199 [2024-07-15 17:08:28.633742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.199 [2024-07-15 17:08:28.633749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.199 [2024-07-15 17:08:28.633927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.199 [2024-07-15 17:08:28.634105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.199 [2024-07-15 17:08:28.634113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.199 [2024-07-15 17:08:28.634120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.199 [2024-07-15 17:08:28.636955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.199 [2024-07-15 17:08:28.646496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.199 [2024-07-15 17:08:28.646935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.199 [2024-07-15 17:08:28.646959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.199 [2024-07-15 17:08:28.646967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.199 [2024-07-15 17:08:28.647145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.199 [2024-07-15 17:08:28.647327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.199 [2024-07-15 17:08:28.647335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.199 [2024-07-15 17:08:28.647342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.199 [2024-07-15 17:08:28.650169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.199 [2024-07-15 17:08:28.659548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.199 [2024-07-15 17:08:28.660013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.199 [2024-07-15 17:08:28.660032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.199 [2024-07-15 17:08:28.660040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.199 [2024-07-15 17:08:28.660218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.199 [2024-07-15 17:08:28.660400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.199 [2024-07-15 17:08:28.660408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.199 [2024-07-15 17:08:28.660415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.199 [2024-07-15 17:08:28.663247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.199 [2024-07-15 17:08:28.672621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.199 [2024-07-15 17:08:28.673084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.199 [2024-07-15 17:08:28.673102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.199 [2024-07-15 17:08:28.673110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.199 [2024-07-15 17:08:28.673292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.199 [2024-07-15 17:08:28.673471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.199 [2024-07-15 17:08:28.673479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.199 [2024-07-15 17:08:28.673486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.199 [2024-07-15 17:08:28.676319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.199 [2024-07-15 17:08:28.685747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.199 [2024-07-15 17:08:28.686230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.199 [2024-07-15 17:08:28.686248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.199 [2024-07-15 17:08:28.686256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.199 [2024-07-15 17:08:28.686434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.199 [2024-07-15 17:08:28.686616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.199 [2024-07-15 17:08:28.686624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.199 [2024-07-15 17:08:28.686631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.199 [2024-07-15 17:08:28.689468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.199 [2024-07-15 17:08:28.698846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.199 [2024-07-15 17:08:28.699293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.199 [2024-07-15 17:08:28.699309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.199 [2024-07-15 17:08:28.699317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.199 [2024-07-15 17:08:28.699494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.199 [2024-07-15 17:08:28.699671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.199 [2024-07-15 17:08:28.699679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.199 [2024-07-15 17:08:28.699685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.200 [2024-07-15 17:08:28.702529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.200 [2024-07-15 17:08:28.711900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.200 [2024-07-15 17:08:28.712340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.200 [2024-07-15 17:08:28.712357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.200 [2024-07-15 17:08:28.712364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.200 [2024-07-15 17:08:28.712543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.200 [2024-07-15 17:08:28.712719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.200 [2024-07-15 17:08:28.712727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.200 [2024-07-15 17:08:28.712734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.200 [2024-07-15 17:08:28.715568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.200 [2024-07-15 17:08:28.725105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.200 [2024-07-15 17:08:28.725550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.200 [2024-07-15 17:08:28.725566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.200 [2024-07-15 17:08:28.725573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.200 [2024-07-15 17:08:28.725751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.200 [2024-07-15 17:08:28.725928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.200 [2024-07-15 17:08:28.725936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.200 [2024-07-15 17:08:28.725942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.200 [2024-07-15 17:08:28.728780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.200 [2024-07-15 17:08:28.738312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.200 [2024-07-15 17:08:28.738753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.200 [2024-07-15 17:08:28.738769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.200 [2024-07-15 17:08:28.738776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.200 [2024-07-15 17:08:28.738953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.200 [2024-07-15 17:08:28.739129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.200 [2024-07-15 17:08:28.739137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.200 [2024-07-15 17:08:28.739143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.200 [2024-07-15 17:08:28.741978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.200 [2024-07-15 17:08:28.751509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.200 [2024-07-15 17:08:28.751944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.200 [2024-07-15 17:08:28.751960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.200 [2024-07-15 17:08:28.751966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.200 [2024-07-15 17:08:28.752143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.200 [2024-07-15 17:08:28.752323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.200 [2024-07-15 17:08:28.752332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.200 [2024-07-15 17:08:28.752338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.200 [2024-07-15 17:08:28.755170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.200 [2024-07-15 17:08:28.764703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.200 [2024-07-15 17:08:28.765117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.200 [2024-07-15 17:08:28.765132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.200 [2024-07-15 17:08:28.765139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.200 [2024-07-15 17:08:28.765319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.200 [2024-07-15 17:08:28.765497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.200 [2024-07-15 17:08:28.765505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.200 [2024-07-15 17:08:28.765511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.200 [2024-07-15 17:08:28.768342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.200 [2024-07-15 17:08:28.777764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.200 [2024-07-15 17:08:28.778203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.200 [2024-07-15 17:08:28.778219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.200 [2024-07-15 17:08:28.778233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.200 [2024-07-15 17:08:28.778411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.200 [2024-07-15 17:08:28.778588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.200 [2024-07-15 17:08:28.778596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.200 [2024-07-15 17:08:28.778602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.200 [2024-07-15 17:08:28.781438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.200 [2024-07-15 17:08:28.790961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.200 [2024-07-15 17:08:28.791382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.200 [2024-07-15 17:08:28.791398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.200 [2024-07-15 17:08:28.791405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.200 [2024-07-15 17:08:28.791582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.200 [2024-07-15 17:08:28.791760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.200 [2024-07-15 17:08:28.791767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.200 [2024-07-15 17:08:28.791774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.200 [2024-07-15 17:08:28.794607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.200 [2024-07-15 17:08:28.804136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.200 [2024-07-15 17:08:28.804582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.200 [2024-07-15 17:08:28.804597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.200 [2024-07-15 17:08:28.804604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.200 [2024-07-15 17:08:28.804781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.200 [2024-07-15 17:08:28.804958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.200 [2024-07-15 17:08:28.804966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.200 [2024-07-15 17:08:28.804973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.200 [2024-07-15 17:08:28.807804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.200 [2024-07-15 17:08:28.817326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.200 [2024-07-15 17:08:28.817767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.200 [2024-07-15 17:08:28.817783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.200 [2024-07-15 17:08:28.817789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.200 [2024-07-15 17:08:28.817966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.200 [2024-07-15 17:08:28.818143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.200 [2024-07-15 17:08:28.818154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.200 [2024-07-15 17:08:28.818160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.200 [2024-07-15 17:08:28.820992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.200 [2024-07-15 17:08:28.830523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.200 [2024-07-15 17:08:28.830880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.200 [2024-07-15 17:08:28.830896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.200 [2024-07-15 17:08:28.830903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.200 [2024-07-15 17:08:28.831079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.200 [2024-07-15 17:08:28.831260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.201 [2024-07-15 17:08:28.831268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.201 [2024-07-15 17:08:28.831274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.201 [2024-07-15 17:08:28.834103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.201 [2024-07-15 17:08:28.843623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.201 [2024-07-15 17:08:28.844052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.201 [2024-07-15 17:08:28.844068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.201 [2024-07-15 17:08:28.844075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.201 [2024-07-15 17:08:28.844257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.201 [2024-07-15 17:08:28.844435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.201 [2024-07-15 17:08:28.844442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.201 [2024-07-15 17:08:28.844449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.201 [2024-07-15 17:08:28.847280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.201 [2024-07-15 17:08:28.856801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.201 [2024-07-15 17:08:28.857238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.201 [2024-07-15 17:08:28.857254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.201 [2024-07-15 17:08:28.857261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.201 [2024-07-15 17:08:28.857437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.201 [2024-07-15 17:08:28.857614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.201 [2024-07-15 17:08:28.857622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.201 [2024-07-15 17:08:28.857628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.201 [2024-07-15 17:08:28.860458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.461 [2024-07-15 17:08:28.869996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.461 [2024-07-15 17:08:28.870362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.461 [2024-07-15 17:08:28.870377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.461 [2024-07-15 17:08:28.870384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.461 [2024-07-15 17:08:28.870561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.461 [2024-07-15 17:08:28.870738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.461 [2024-07-15 17:08:28.870745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.461 [2024-07-15 17:08:28.870752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.461 [2024-07-15 17:08:28.873588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.461 [2024-07-15 17:08:28.883121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.461 [2024-07-15 17:08:28.883585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.461 [2024-07-15 17:08:28.883601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.461 [2024-07-15 17:08:28.883608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.461 [2024-07-15 17:08:28.883785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.461 [2024-07-15 17:08:28.883962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.461 [2024-07-15 17:08:28.883971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.461 [2024-07-15 17:08:28.883977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.461 [2024-07-15 17:08:28.886811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.461 [2024-07-15 17:08:28.896188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.461 [2024-07-15 17:08:28.896634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.461 [2024-07-15 17:08:28.896650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.461 [2024-07-15 17:08:28.896656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.461 [2024-07-15 17:08:28.896834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.461 [2024-07-15 17:08:28.897011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.461 [2024-07-15 17:08:28.897019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.461 [2024-07-15 17:08:28.897026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.461 [2024-07-15 17:08:28.899858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.461 [2024-07-15 17:08:28.909386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.461 [2024-07-15 17:08:28.909753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.461 [2024-07-15 17:08:28.909769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.461 [2024-07-15 17:08:28.909776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.461 [2024-07-15 17:08:28.909959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.461 [2024-07-15 17:08:28.910136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.461 [2024-07-15 17:08:28.910143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.461 [2024-07-15 17:08:28.910149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.461 [2024-07-15 17:08:28.912982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.461 [2024-07-15 17:08:28.922519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.461 [2024-07-15 17:08:28.922960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.461 [2024-07-15 17:08:28.922976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420
00:26:22.461 [2024-07-15 17:08:28.922983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set
00:26:22.461 [2024-07-15 17:08:28.923159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor
00:26:22.461 [2024-07-15 17:08:28.923338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.461 [2024-07-15 17:08:28.923346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.461 [2024-07-15 17:08:28.923353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.461 [2024-07-15 17:08:28.926187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.461 [2024-07-15 17:08:28.935722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.461 [2024-07-15 17:08:28.936182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.461 [2024-07-15 17:08:28.936198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.461 [2024-07-15 17:08:28.936204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.461 [2024-07-15 17:08:28.936388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.461 [2024-07-15 17:08:28.936566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.461 [2024-07-15 17:08:28.936574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.461 [2024-07-15 17:08:28.936580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.461 [2024-07-15 17:08:28.939416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.461 [2024-07-15 17:08:28.948784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.461 [2024-07-15 17:08:28.949222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.461 [2024-07-15 17:08:28.949242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.461 [2024-07-15 17:08:28.949249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.461 [2024-07-15 17:08:28.949425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.461 [2024-07-15 17:08:28.949601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.461 [2024-07-15 17:08:28.949609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.461 [2024-07-15 17:08:28.949619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.461 [2024-07-15 17:08:28.952452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.461 [2024-07-15 17:08:28.961978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.461 [2024-07-15 17:08:28.962422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.461 [2024-07-15 17:08:28.962438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.461 [2024-07-15 17:08:28.962445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.461 [2024-07-15 17:08:28.962621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.462 [2024-07-15 17:08:28.962799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.462 [2024-07-15 17:08:28.962807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.462 [2024-07-15 17:08:28.962813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.462 [2024-07-15 17:08:28.965642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.462 [2024-07-15 17:08:28.975170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.462 [2024-07-15 17:08:28.975624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.462 [2024-07-15 17:08:28.975640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.462 [2024-07-15 17:08:28.975647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.462 [2024-07-15 17:08:28.975823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.462 [2024-07-15 17:08:28.976001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.462 [2024-07-15 17:08:28.976010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.462 [2024-07-15 17:08:28.976016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.462 [2024-07-15 17:08:28.978850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.462 [2024-07-15 17:08:28.988218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.462 [2024-07-15 17:08:28.988615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.462 [2024-07-15 17:08:28.988630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.462 [2024-07-15 17:08:28.988637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.462 [2024-07-15 17:08:28.988814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.462 [2024-07-15 17:08:28.988992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.462 [2024-07-15 17:08:28.989000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.462 [2024-07-15 17:08:28.989007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.462 [2024-07-15 17:08:28.991840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.462 [2024-07-15 17:08:29.001381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.462 [2024-07-15 17:08:29.001866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.462 [2024-07-15 17:08:29.001885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.462 [2024-07-15 17:08:29.001892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.462 [2024-07-15 17:08:29.002069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.462 [2024-07-15 17:08:29.002253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.462 [2024-07-15 17:08:29.002261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.462 [2024-07-15 17:08:29.002267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.462 [2024-07-15 17:08:29.005099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.462 [2024-07-15 17:08:29.014466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.462 [2024-07-15 17:08:29.014933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.462 [2024-07-15 17:08:29.014950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.462 [2024-07-15 17:08:29.014956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.462 [2024-07-15 17:08:29.015134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.462 [2024-07-15 17:08:29.015314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.462 [2024-07-15 17:08:29.015323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.462 [2024-07-15 17:08:29.015329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.462 [2024-07-15 17:08:29.018163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.462 [2024-07-15 17:08:29.027538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.462 [2024-07-15 17:08:29.027908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.462 [2024-07-15 17:08:29.027924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.462 [2024-07-15 17:08:29.027931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.462 [2024-07-15 17:08:29.028107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.462 [2024-07-15 17:08:29.028293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.462 [2024-07-15 17:08:29.028301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.462 [2024-07-15 17:08:29.028308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.462 [2024-07-15 17:08:29.031134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.462 [2024-07-15 17:08:29.040695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.462 [2024-07-15 17:08:29.041152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.462 [2024-07-15 17:08:29.041168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.462 [2024-07-15 17:08:29.041175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.462 [2024-07-15 17:08:29.041356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.462 [2024-07-15 17:08:29.041536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.462 [2024-07-15 17:08:29.041544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.462 [2024-07-15 17:08:29.041550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.462 [2024-07-15 17:08:29.044387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.462 [2024-07-15 17:08:29.053760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.462 [2024-07-15 17:08:29.054168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.462 [2024-07-15 17:08:29.054184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.462 [2024-07-15 17:08:29.054190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.462 [2024-07-15 17:08:29.054372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.462 [2024-07-15 17:08:29.054549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.462 [2024-07-15 17:08:29.054557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.462 [2024-07-15 17:08:29.054563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.462 [2024-07-15 17:08:29.057403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.462 [2024-07-15 17:08:29.066948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.462 [2024-07-15 17:08:29.067417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.462 [2024-07-15 17:08:29.067434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.462 [2024-07-15 17:08:29.067441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.462 [2024-07-15 17:08:29.067618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.462 [2024-07-15 17:08:29.067794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.462 [2024-07-15 17:08:29.067802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.462 [2024-07-15 17:08:29.067808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.462 [2024-07-15 17:08:29.070643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.462 [2024-07-15 17:08:29.080015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.462 [2024-07-15 17:08:29.080472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.462 [2024-07-15 17:08:29.080488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.462 [2024-07-15 17:08:29.080495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.462 [2024-07-15 17:08:29.080672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.462 [2024-07-15 17:08:29.080849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.462 [2024-07-15 17:08:29.080857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.462 [2024-07-15 17:08:29.080863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.462 [2024-07-15 17:08:29.083709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.462 [2024-07-15 17:08:29.093092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.462 [2024-07-15 17:08:29.093418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.462 [2024-07-15 17:08:29.093435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.462 [2024-07-15 17:08:29.093441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.462 [2024-07-15 17:08:29.093618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.462 [2024-07-15 17:08:29.093796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.462 [2024-07-15 17:08:29.093803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.462 [2024-07-15 17:08:29.093809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.462 [2024-07-15 17:08:29.096648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.462 [2024-07-15 17:08:29.106207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.462 [2024-07-15 17:08:29.106534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.462 [2024-07-15 17:08:29.106549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.462 [2024-07-15 17:08:29.106556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.463 [2024-07-15 17:08:29.106732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.463 [2024-07-15 17:08:29.106913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.463 [2024-07-15 17:08:29.106921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.463 [2024-07-15 17:08:29.106927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.463 [2024-07-15 17:08:29.109766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.463 [2024-07-15 17:08:29.119308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.463 [2024-07-15 17:08:29.119680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.463 [2024-07-15 17:08:29.119695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.463 [2024-07-15 17:08:29.119702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.463 [2024-07-15 17:08:29.119879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.463 [2024-07-15 17:08:29.120055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.463 [2024-07-15 17:08:29.120062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.463 [2024-07-15 17:08:29.120069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.463 [2024-07-15 17:08:29.122906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.722 [2024-07-15 17:08:29.132454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.722 [2024-07-15 17:08:29.132908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.722 [2024-07-15 17:08:29.132925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.722 [2024-07-15 17:08:29.132936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.722 [2024-07-15 17:08:29.133113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.722 [2024-07-15 17:08:29.133296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.722 [2024-07-15 17:08:29.133304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.722 [2024-07-15 17:08:29.133310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.722 [2024-07-15 17:08:29.136146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.722 [2024-07-15 17:08:29.145526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.722 [2024-07-15 17:08:29.145842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.722 [2024-07-15 17:08:29.145858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.722 [2024-07-15 17:08:29.145866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.722 [2024-07-15 17:08:29.146042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.722 [2024-07-15 17:08:29.146223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.722 [2024-07-15 17:08:29.146239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.722 [2024-07-15 17:08:29.146245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.722 [2024-07-15 17:08:29.149080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.722 [2024-07-15 17:08:29.158628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.722 [2024-07-15 17:08:29.158943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.722 [2024-07-15 17:08:29.158959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.723 [2024-07-15 17:08:29.158965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.723 [2024-07-15 17:08:29.159142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.723 [2024-07-15 17:08:29.159326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.723 [2024-07-15 17:08:29.159335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.723 [2024-07-15 17:08:29.159341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.723 [2024-07-15 17:08:29.162175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.723 [2024-07-15 17:08:29.171732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.723 [2024-07-15 17:08:29.172115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.723 [2024-07-15 17:08:29.172131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.723 [2024-07-15 17:08:29.172137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.723 [2024-07-15 17:08:29.172321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.723 [2024-07-15 17:08:29.172498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.723 [2024-07-15 17:08:29.172509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.723 [2024-07-15 17:08:29.172515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.723 [2024-07-15 17:08:29.175357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.723 [2024-07-15 17:08:29.184913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.723 [2024-07-15 17:08:29.185290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.723 [2024-07-15 17:08:29.185307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.723 [2024-07-15 17:08:29.185314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.723 [2024-07-15 17:08:29.185492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.723 [2024-07-15 17:08:29.185674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.723 [2024-07-15 17:08:29.185682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.723 [2024-07-15 17:08:29.185688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.723 [2024-07-15 17:08:29.188523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.723 [2024-07-15 17:08:29.198083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.723 [2024-07-15 17:08:29.198420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.723 [2024-07-15 17:08:29.198436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.723 [2024-07-15 17:08:29.198443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.723 [2024-07-15 17:08:29.198619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.723 [2024-07-15 17:08:29.198796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.723 [2024-07-15 17:08:29.198804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.723 [2024-07-15 17:08:29.198810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.723 [2024-07-15 17:08:29.201651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.723 [2024-07-15 17:08:29.211191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.723 [2024-07-15 17:08:29.211587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.723 [2024-07-15 17:08:29.211603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.723 [2024-07-15 17:08:29.211611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.723 [2024-07-15 17:08:29.211787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.723 [2024-07-15 17:08:29.211968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.723 [2024-07-15 17:08:29.211978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.723 [2024-07-15 17:08:29.211984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.723 [2024-07-15 17:08:29.214818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.723 [2024-07-15 17:08:29.224380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.723 [2024-07-15 17:08:29.224882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.723 [2024-07-15 17:08:29.224898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.723 [2024-07-15 17:08:29.224906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.723 [2024-07-15 17:08:29.225083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.723 [2024-07-15 17:08:29.225265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.723 [2024-07-15 17:08:29.225274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.723 [2024-07-15 17:08:29.225280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.723 [2024-07-15 17:08:29.228110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.723 [2024-07-15 17:08:29.237499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.723 [2024-07-15 17:08:29.237911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.723 [2024-07-15 17:08:29.237928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.723 [2024-07-15 17:08:29.237934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.723 [2024-07-15 17:08:29.238111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.723 [2024-07-15 17:08:29.238293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.723 [2024-07-15 17:08:29.238302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.723 [2024-07-15 17:08:29.238308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.723 [2024-07-15 17:08:29.241136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.723 [2024-07-15 17:08:29.250676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.723 [2024-07-15 17:08:29.251035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.723 [2024-07-15 17:08:29.251052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.723 [2024-07-15 17:08:29.251059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.723 [2024-07-15 17:08:29.251240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.723 [2024-07-15 17:08:29.251419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.723 [2024-07-15 17:08:29.251428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.723 [2024-07-15 17:08:29.251434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.723 [2024-07-15 17:08:29.254265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.723 [2024-07-15 17:08:29.263819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.723 [2024-07-15 17:08:29.264189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.723 [2024-07-15 17:08:29.264205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.723 [2024-07-15 17:08:29.264211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.723 [2024-07-15 17:08:29.264400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.723 [2024-07-15 17:08:29.264578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.723 [2024-07-15 17:08:29.264587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.723 [2024-07-15 17:08:29.264593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.723 [2024-07-15 17:08:29.267429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.723 [2024-07-15 17:08:29.276969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.723 [2024-07-15 17:08:29.277383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.723 [2024-07-15 17:08:29.277399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.723 [2024-07-15 17:08:29.277406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.723 [2024-07-15 17:08:29.277584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.723 [2024-07-15 17:08:29.277761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.723 [2024-07-15 17:08:29.277769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.723 [2024-07-15 17:08:29.277775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.723 [2024-07-15 17:08:29.280615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.723 [2024-07-15 17:08:29.290062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.723 [2024-07-15 17:08:29.290396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.723 [2024-07-15 17:08:29.290412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.723 [2024-07-15 17:08:29.290419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.724 [2024-07-15 17:08:29.290597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.724 [2024-07-15 17:08:29.290774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.724 [2024-07-15 17:08:29.290784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.724 [2024-07-15 17:08:29.290793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.724 [2024-07-15 17:08:29.293637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.724 [2024-07-15 17:08:29.303200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.724 [2024-07-15 17:08:29.303564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.724 [2024-07-15 17:08:29.303580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.724 [2024-07-15 17:08:29.303587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.724 [2024-07-15 17:08:29.303764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.724 [2024-07-15 17:08:29.303941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.724 [2024-07-15 17:08:29.303949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.724 [2024-07-15 17:08:29.303958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.724 [2024-07-15 17:08:29.306801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.724 [2024-07-15 17:08:29.316346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.724 [2024-07-15 17:08:29.316763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.724 [2024-07-15 17:08:29.316781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.724 [2024-07-15 17:08:29.316788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.724 [2024-07-15 17:08:29.316965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.724 [2024-07-15 17:08:29.317142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.724 [2024-07-15 17:08:29.317150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.724 [2024-07-15 17:08:29.317157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.724 [2024-07-15 17:08:29.319996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.724 [2024-07-15 17:08:29.329551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.724 [2024-07-15 17:08:29.330011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.724 [2024-07-15 17:08:29.330027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.724 [2024-07-15 17:08:29.330034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.724 [2024-07-15 17:08:29.330211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.724 [2024-07-15 17:08:29.330398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.724 [2024-07-15 17:08:29.330410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.724 [2024-07-15 17:08:29.330418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.724 [2024-07-15 17:08:29.333257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.724 [2024-07-15 17:08:29.342635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.724 [2024-07-15 17:08:29.343039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.724 [2024-07-15 17:08:29.343055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.724 [2024-07-15 17:08:29.343062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.724 [2024-07-15 17:08:29.343247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.724 [2024-07-15 17:08:29.343424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.724 [2024-07-15 17:08:29.343433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.724 [2024-07-15 17:08:29.343439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.724 [2024-07-15 17:08:29.346282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.724 [2024-07-15 17:08:29.346952] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.724 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.724 [2024-07-15 17:08:29.355824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.724 [2024-07-15 17:08:29.356196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.724 [2024-07-15 17:08:29.356213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.724 [2024-07-15 17:08:29.356220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.724 [2024-07-15 17:08:29.356402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.724 [2024-07-15 17:08:29.356579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.724 [2024-07-15 17:08:29.356588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.724 [2024-07-15 17:08:29.356594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.724 [2024-07-15 17:08:29.359430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.724 [2024-07-15 17:08:29.368975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.724 [2024-07-15 17:08:29.369328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.724 [2024-07-15 17:08:29.369344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.724 [2024-07-15 17:08:29.369351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.724 [2024-07-15 17:08:29.369528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.724 [2024-07-15 17:08:29.369706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.724 [2024-07-15 17:08:29.369714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.724 [2024-07-15 17:08:29.369720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.724 [2024-07-15 17:08:29.372560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.724 [2024-07-15 17:08:29.382129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.724 [2024-07-15 17:08:29.382507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.724 [2024-07-15 17:08:29.382523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.724 [2024-07-15 17:08:29.382531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.724 [2024-07-15 17:08:29.382712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.724 [2024-07-15 17:08:29.382889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.724 [2024-07-15 17:08:29.382897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.724 [2024-07-15 17:08:29.382903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.724 [2024-07-15 17:08:29.385742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.983 Malloc0 00:26:22.983 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.983 17:08:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:22.983 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.983 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.983 [2024-07-15 17:08:29.395297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.983 [2024-07-15 17:08:29.395677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.983 [2024-07-15 17:08:29.395693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.983 [2024-07-15 17:08:29.395701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.983 [2024-07-15 17:08:29.395878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.983 [2024-07-15 17:08:29.396054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.983 [2024-07-15 17:08:29.396062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.983 [2024-07-15 17:08:29.396069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.983 [2024-07-15 17:08:29.398910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.983 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.983 17:08:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:22.984 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.984 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.984 [2024-07-15 17:08:29.408466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.984 [2024-07-15 17:08:29.408910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.984 [2024-07-15 17:08:29.408926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x709980 with addr=10.0.0.2, port=4420 00:26:22.984 [2024-07-15 17:08:29.408933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709980 is same with the state(5) to be set 00:26:22.984 [2024-07-15 17:08:29.409110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709980 (9): Bad file descriptor 00:26:22.984 [2024-07-15 17:08:29.409291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.984 [2024-07-15 17:08:29.409299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.984 [2024-07-15 17:08:29.409305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:22.984 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.984 17:08:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.984 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.984 [2024-07-15 17:08:29.412138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.984 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.984 [2024-07-15 17:08:29.415079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.984 17:08:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.984 17:08:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 232231 00:26:22.984 [2024-07-15 17:08:29.421514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.984 [2024-07-15 17:08:29.578712] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:32.956 00:26:32.956 Latency(us) 00:26:32.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.956 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:32.956 Verification LBA range: start 0x0 length 0x4000 00:26:32.956 Nvme1n1 : 15.00 8011.59 31.30 12921.61 0.00 6094.76 669.61 16070.57 00:26:32.957 =================================================================================================================== 00:26:32.957 Total : 8011.59 31.30 12921.61 0.00 6094.76 669.61 16070.57 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:32.957 rmmod nvme_tcp 00:26:32.957 rmmod nvme_fabrics 00:26:32.957 rmmod nvme_keyring 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 233163 ']' 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 233163 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 233163 ']' 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 233163 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 233163 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 233163' 00:26:32.957 killing process with pid 233163 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 233163 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 233163 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.957 17:08:38 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.957 17:08:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.890 17:08:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.890 00:26:33.890 real 0m25.460s 00:26:33.890 user 1m2.536s 00:26:33.890 sys 0m5.822s 00:26:33.890 17:08:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:33.890 17:08:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:33.890 ************************************ 00:26:33.890 END TEST nvmf_bdevperf 00:26:33.890 ************************************ 00:26:34.149 17:08:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:34.149 17:08:40 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:34.149 17:08:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:34.149 17:08:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.149 17:08:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:34.149 ************************************ 00:26:34.149 START TEST nvmf_target_disconnect 00:26:34.149 ************************************ 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:34.149 * Looking for test storage... 
00:26:34.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:34.149 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:34.150 17:08:40 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:34.150 17:08:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.417 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:39.417 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:39.418 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.418 17:08:45 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:39.418 Found net devices under 0000:86:00.0: cvl_0_0 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:39.418 Found net devices under 0000:86:00.1: cvl_0_1 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:39.418 17:08:45 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:39.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:26:39.418 00:26:39.418 --- 10.0.0.2 ping statistics --- 00:26:39.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.418 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:26:39.418 00:26:39.418 --- 10.0.0.1 ping statistics --- 00:26:39.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.418 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.418 17:08:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:39.418 17:08:45 
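The nvmf_tcp_init trace above builds the TCP test network by moving one port of the dual-port NIC into a network namespace and pinging in both directions. A dry-run sketch of those steps, using the interface names cvl_0_0/cvl_0_1 from this run; `run` echoes each command instead of executing it, since the real steps need root and the physical NIC:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init above.
# run() only prints each command; drop it to execute for real (as root).
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace; serves 10.0.0.2:4420
INITIATOR_IF=cvl_0_1     # stays in the root namespace; uses 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

The two ping checks correspond to the 0% packet loss results in the log; the harness returns 0 from nvmftestinit only after both succeed.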
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:39.418 ************************************ 00:26:39.418 START TEST nvmf_target_disconnect_tc1 00:26:39.418 ************************************ 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.418 17:08:46 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:39.418 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:39.678 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.678 [2024-07-15 17:08:46.128613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.678 [2024-07-15 17:08:46.128661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2499e60 with addr=10.0.0.2, port=4420 00:26:39.678 [2024-07-15 17:08:46.128681] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:39.678 [2024-07-15 17:08:46.128693] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:39.678 [2024-07-15 17:08:46.128700] nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:39.678 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:39.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:39.678 Initializing NVMe Controllers 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:39.678 00:26:39.678 real 0m0.097s 00:26:39.678 user 0m0.039s 00:26:39.678 sys 0m0.058s 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:39.678 ************************************ 00:26:39.678 END TEST nvmf_target_disconnect_tc1 00:26:39.678 ************************************ 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:39.678 ************************************ 00:26:39.678 START TEST nvmf_target_disconnect_tc2 00:26:39.678 
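tc1 passes precisely because the wrapped command fails: no target is listening yet, so spdk_nvme_probe() must error out, and the NOT wrapper from autotest_common.sh inverts that exit status. A minimal sketch of that helper, reconstructed from the es handling visible in the trace (es=1, the `es > 128` check); this is an assumption about its shape, not the exact implementation:

```shell
# Sketch of the autotest_common.sh NOT helper: run a command that is
# EXPECTED to fail, and succeed only when it does.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 mean death by signal (a crash), which still
    # counts as a real failure rather than the expected graceful error.
    if (( es > 128 )); then
        return "$es"
    fi
    # Invert: return success (0) only if the command exited nonzero.
    (( es != 0 ))
}
```

In the log, reconnect exits 1 after "Create probe context failed", so `es=1`, the final arithmetic test is true, and the tc1 test reports PASS.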
************************************ 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=238224 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 238224 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 238224 ']' 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:39.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:39.678 17:08:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.678 [2024-07-15 17:08:46.265733] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:26:39.678 [2024-07-15 17:08:46.265772] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.678 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.678 [2024-07-15 17:08:46.336988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:39.937 [2024-07-15 17:08:46.410855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.937 [2024-07-15 17:08:46.410898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.937 [2024-07-15 17:08:46.410905] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.937 [2024-07-15 17:08:46.410911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.937 [2024-07-15 17:08:46.410916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:39.937 [2024-07-15 17:08:46.411031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:39.937 [2024-07-15 17:08:46.411141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:39.937 [2024-07-15 17:08:46.411269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:39.937 [2024-07-15 17:08:46.411269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:40.503 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:40.503 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:40.503 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:40.503 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:40.503 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.503 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.503 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:40.503 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.503 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.503 Malloc0 00:26:40.503 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.503 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.504 [2024-07-15 17:08:47.129082] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.504 [2024-07-15 17:08:47.157322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=238355 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:40.504 17:08:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:40.762 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.672 17:08:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 238224 00:26:42.672 17:08:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error 
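Collected from the rpc_cmd traces above, the tc2 target bring-up is a short RPC sequence: create the malloc bdev, the TCP transport, the subsystem, its namespace, and the listeners. A dry-run sketch of that sequence (the `rpc.py` invocation is an assumption; in the log each call goes through the harness's rpc_cmd helper, and `rpc` here only echoes):

```shell
#!/usr/bin/env bash
# The rpc_cmd sequence from host/target_disconnect.sh, in trace order.
# rpc() echoes instead of calling scripts/rpc.py against a live target.
rpc() { echo "+ rpc.py $*"; }

rpc bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once the listener is up ("Listening on 10.0.0.2 port 4420" in the log), the reconnect example is started and the target is killed with `kill -9` to provoke the I/O failures that follow.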
(sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, 
sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 [2024-07-15 17:08:49.185008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 
00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 [2024-07-15 17:08:49.185218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 
00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Write completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.672 starting I/O failed 00:26:42.672 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 
starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 [2024-07-15 17:08:49.185415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 
00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Read completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 Write completed with error (sct=0, sc=8) 00:26:42.673 starting I/O failed 00:26:42.673 [2024-07-15 17:08:49.185609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:42.673 [2024-07-15 17:08:49.185823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.185840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 
00:26:42.673 [2024-07-15 17:08:49.186022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.186031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.186260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.186271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.186387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.186396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.186574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.186584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.186705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.186715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 
00:26:42.673 [2024-07-15 17:08:49.186967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.186996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.187207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.187246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.187463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.187492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.187664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.187693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.187963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.187993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 
00:26:42.673 [2024-07-15 17:08:49.188347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.188393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.188577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.188609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.188845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.188856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.189020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.189030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.189154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.189164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 
00:26:42.673 [2024-07-15 17:08:49.189295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.189305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.189432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.189443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.189558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.189568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.189688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.189699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.189870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.189880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 
00:26:42.673 [2024-07-15 17:08:49.190081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.190091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.190293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.190303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.190514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.190524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.673 [2024-07-15 17:08:49.190693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.673 [2024-07-15 17:08:49.190703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.673 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.190891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.190902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 
00:26:42.674 [2024-07-15 17:08:49.191071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.191081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.191265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.191277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.191505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.191535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.191706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.191735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.192139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.192176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 
00:26:42.674 [2024-07-15 17:08:49.192366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.192377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.192537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.192547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.192664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.192674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.192854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.192863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.193135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.193145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 
00:26:42.674 [2024-07-15 17:08:49.193395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.193405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.193606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.193616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.193988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.194017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.194237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.194267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.194564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.194594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 
00:26:42.674 [2024-07-15 17:08:49.194768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.194797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.195001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.195030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.195220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.195236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.195452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.195462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.195661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.195671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 
00:26:42.674 [2024-07-15 17:08:49.195930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.195940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.196107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.196117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.196229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.196240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.196350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.196359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.196535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.196545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 
00:26:42.674 [2024-07-15 17:08:49.196671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.196683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.196855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.196865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.197023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.197034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.197274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.197285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.197533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.197543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 
00:26:42.674 [2024-07-15 17:08:49.197714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.197724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.197852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.197862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.198055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.198065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.198316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.198326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.198495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.198505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 
00:26:42.674 [2024-07-15 17:08:49.198631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.198641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.198811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.198821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.198995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.199005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.199168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.674 [2024-07-15 17:08:49.199178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.674 qpair failed and we were unable to recover it. 00:26:42.674 [2024-07-15 17:08:49.199397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.675 [2024-07-15 17:08:49.199407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.675 qpair failed and we were unable to recover it. 
00:26:42.675 [2024-07-15 17:08:49.199517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.675 [2024-07-15 17:08:49.199527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.675 qpair failed and we were unable to recover it. 00:26:42.675 [2024-07-15 17:08:49.199717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.675 [2024-07-15 17:08:49.199727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.675 qpair failed and we were unable to recover it. 00:26:42.675 [2024-07-15 17:08:49.199909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.675 [2024-07-15 17:08:49.199919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.675 qpair failed and we were unable to recover it. 00:26:42.675 [2024-07-15 17:08:49.200098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.675 [2024-07-15 17:08:49.200128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.675 qpair failed and we were unable to recover it. 00:26:42.675 [2024-07-15 17:08:49.200397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.675 [2024-07-15 17:08:49.200427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.675 qpair failed and we were unable to recover it. 
00:26:42.675 [2024-07-15 17:08:49.200598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.675 [2024-07-15 17:08:49.200627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:42.675 qpair failed and we were unable to recover it.
[... the same three-line failure — connect() errno = 111, sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 17:08:49.200848 through 17:08:49.225184 ...]
00:26:42.678 [2024-07-15 17:08:49.225343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.225354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.225528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.225537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.225654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.225664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.225774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.225784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.226010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.226020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 
00:26:42.678 [2024-07-15 17:08:49.226217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.226230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.226453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.226463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.226702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.226712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.227050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.227060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.227324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.227335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 
00:26:42.678 [2024-07-15 17:08:49.227530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.227540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.227708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.227718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.227827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.227837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.227961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.227971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.228145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.228155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 
00:26:42.678 [2024-07-15 17:08:49.228377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.228388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.228593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.228603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.228714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.228722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.228852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.228861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.229038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.229048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 
00:26:42.678 [2024-07-15 17:08:49.229332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.229363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.229580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.229609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.229880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.229909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.230120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.230150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.230427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.230457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 
00:26:42.678 [2024-07-15 17:08:49.230631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.230665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.230884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.230914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.231081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.231090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.231259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.231269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.231424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.231456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 
00:26:42.678 [2024-07-15 17:08:49.231671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.231701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.231923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.231951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.232163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.232192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.232365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.232395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 17:08:49.232553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 17:08:49.232583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 
00:26:42.678 [2024-07-15 17:08:49.232734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.232763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.232909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.232937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.233091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.233120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.233382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.233413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.233630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.233660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 
00:26:42.679 [2024-07-15 17:08:49.233819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.233849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.234145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.234175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.234425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.234456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.234622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.234651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.234927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.234956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 
00:26:42.679 [2024-07-15 17:08:49.235114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.235142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.235351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.235382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.235519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.235529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.235688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.235697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.235859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.235870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 
00:26:42.679 [2024-07-15 17:08:49.235985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.235994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.236191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.236201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.236407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.236418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.236598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.236630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.236844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.236874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 
00:26:42.679 [2024-07-15 17:08:49.237190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.237221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.237521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.237552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.237773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.237805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.238039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.238067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.238289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.238300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 
00:26:42.679 [2024-07-15 17:08:49.238423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.238433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.238549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.238558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.238746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.238756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.238878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.238888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.239058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.239068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 
00:26:42.679 [2024-07-15 17:08:49.239258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.239270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.239446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.239456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.239615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.239625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.239780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.239790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.239973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.239984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 
00:26:42.679 [2024-07-15 17:08:49.240106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.240116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.240312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.240323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.240454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.240464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.240594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.240603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 17:08:49.240722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.240732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 
00:26:42.679 [2024-07-15 17:08:49.241067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 17:08:49.241077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 17:08:49.241186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 17:08:49.241195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 17:08:49.241390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 17:08:49.241400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 17:08:49.241504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 17:08:49.241513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 17:08:49.241739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 17:08:49.241749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 
00:26:42.683 [2024-07-15 17:08:49.260811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.260821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.261044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.261054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.261170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.261180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.261348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.261358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.261483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.261493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 
00:26:42.683 [2024-07-15 17:08:49.261611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.261620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.261844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.261854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.262158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.262168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.262341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.262351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.262485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.262494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 
00:26:42.683 [2024-07-15 17:08:49.262768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.262777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.263001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.263010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.263216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.263229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.263463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.263471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.263660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.263668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 
00:26:42.683 [2024-07-15 17:08:49.263780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.263788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.264050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.264058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.264220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.264234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.264331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.264341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.264470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.264479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 
00:26:42.683 [2024-07-15 17:08:49.264649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.264659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.264818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.264827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.265024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.265033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.265210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.265219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.265384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.265393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 
00:26:42.683 [2024-07-15 17:08:49.265550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.265559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.265731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.265740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.265844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.265852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.265944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.265953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.266140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.266149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 
00:26:42.683 [2024-07-15 17:08:49.266278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.266288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.266422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.266431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.266535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.266543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.266709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.266718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.266828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.266837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 
00:26:42.683 [2024-07-15 17:08:49.267090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.267100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 17:08:49.267359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 17:08:49.267368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.267559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.267567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.267736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.267745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.267917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.267926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 17:08:49.268153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.268162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.268354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.268363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.268635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.268644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.268938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.268946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.269185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.269194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 17:08:49.269431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.269443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.269567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.269575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.269761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.269770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.270033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.270041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.270244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.270254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 17:08:49.270444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.270453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.270632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.270641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.270749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.270758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.271059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.271067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.271178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.271186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 17:08:49.271410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.271419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.271539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.271548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.271717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.271725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.271890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.271899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.272126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.272135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 17:08:49.272320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.272328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.272528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.272539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.272708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.272717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.272942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.272951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.273174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.273183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 17:08:49.273354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.273364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.273458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.273467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.273641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.273650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.273806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.273815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.273921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.273930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 17:08:49.274156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.274164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.274342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.274351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.274467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.274475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.274634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.274642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.274829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.274838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 17:08:49.275007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.275018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.275144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 17:08:49.275152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 17:08:49.275319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.275330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.275556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.275565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.275678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.275686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 
00:26:42.685 [2024-07-15 17:08:49.275904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.275914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.276104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.276113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.276308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.276317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.276508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.276517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.276753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.276761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 
00:26:42.685 [2024-07-15 17:08:49.276972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.276980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.277138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.277147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.277280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.277289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.277409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.277418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.277598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.277607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 
00:26:42.685 [2024-07-15 17:08:49.277737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.277746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.277864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.277873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.278038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.278047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.278220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.278234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.278352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.278361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 
00:26:42.685 [2024-07-15 17:08:49.278525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.278533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.278647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.278656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.278774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.278783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.278895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.278903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.279022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.279030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 
00:26:42.685 [2024-07-15 17:08:49.279259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.279268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.279502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.279511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.279760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.279792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.280003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.280017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.280288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.280301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 
00:26:42.685 [2024-07-15 17:08:49.280477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.280491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.280625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.280637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.280822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.280835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.281036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.281046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.281221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.281235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 
00:26:42.685 [2024-07-15 17:08:49.281348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.281356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 17:08:49.281487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 17:08:49.281496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.281613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.281622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.281755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.281763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.281877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.281885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 
00:26:42.686 [2024-07-15 17:08:49.282085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.282095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.282198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.282207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.282387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.282396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.282574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.282583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.282700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.282709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 
00:26:42.686 [2024-07-15 17:08:49.282811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.282819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.283095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.283103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.283279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.283290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.283426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.283435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.283561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.283570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 
00:26:42.686 [2024-07-15 17:08:49.283697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.283705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.283807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.283815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.283990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.283999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.284115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.284125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.284367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.284376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 
00:26:42.686 [2024-07-15 17:08:49.284598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.284607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.284734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.284744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.284853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.284862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.285059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.285068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.285234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.285245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 
00:26:42.686 [2024-07-15 17:08:49.285423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.285432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.285558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.285567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.285742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.285751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.285936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.285945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.286063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.286072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 
00:26:42.686 [2024-07-15 17:08:49.286318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.286327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.286461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.286470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.286590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.286600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.286760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.286768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.286888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.286897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 
00:26:42.686 [2024-07-15 17:08:49.287160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.287169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.287389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.287398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 17:08:49.287517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 17:08:49.287528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.630522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.630546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.630690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.630701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 
00:26:42.995 [2024-07-15 17:08:49.630860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.630870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.631067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.631077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.631193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.631202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.631425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.631435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.631637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.631647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 
00:26:42.995 [2024-07-15 17:08:49.631820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.631829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.632029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.632039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.632151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.632160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.632295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.632305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.632427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.632436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 
00:26:42.995 [2024-07-15 17:08:49.632608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.632617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.632793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.632803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 17:08:49.632896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 17:08:49.632906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.633028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.633057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.633198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.633249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 
00:26:42.996 [2024-07-15 17:08:49.633461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.633490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.633719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.633749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.634013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.634043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.634269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.634279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.634440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.634451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 
00:26:42.996 [2024-07-15 17:08:49.634697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.634726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.634981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.635010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.635215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.635259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.635453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.635484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.635753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.635781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 
00:26:42.996 [2024-07-15 17:08:49.635990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.636000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.636221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.636235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.636489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.636499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.636579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.636589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.636705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.636734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 
00:26:42.996 [2024-07-15 17:08:49.636889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.636918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.637136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.637166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.637362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.637374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.637568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.637578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.637757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.637767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 
00:26:42.996 [2024-07-15 17:08:49.637954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.637963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.638140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.638150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.638262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.638272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.638427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.638437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.638628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.638657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 
00:26:42.996 [2024-07-15 17:08:49.638816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.638846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.639160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.639169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.639292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.639302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.639407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.639416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.639644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.639653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 
00:26:42.996 [2024-07-15 17:08:49.639715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.639724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.639885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.639895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.640061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.640071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.640316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.640346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.640499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.640528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 
00:26:42.996 [2024-07-15 17:08:49.640740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.640769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.641075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.641104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 17:08:49.641246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 17:08:49.641282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.641533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.641542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.641761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.641771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 17:08:49.641946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.641955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.642204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.642214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.642442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.642451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.642638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.642648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.642823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.642832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 17:08:49.642935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.642944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.643182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.643191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.643390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.643400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.643574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.643584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.643759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.643769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 17:08:49.643866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.643875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.644057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.644085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.644353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.644383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.644542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.644571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.644779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.644807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 17:08:49.644952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.644980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.645188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.645197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.645369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.645381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.645539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.645549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.645724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.645752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 17:08:49.645950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.645979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.646186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.646215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.646432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.646462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.646657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.646685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.646898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.646927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 17:08:49.647166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.647195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.647418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.647427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.647537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.647566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.647855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.647884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.648170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.648199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 17:08:49.648419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.648489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.648713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.648777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.649022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.649055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.649296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.649331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.649497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.649510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 17:08:49.649762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.649775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.649957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.649970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.650066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 17:08:49.650094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 17:08:49.650351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.650384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.650608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.650637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 
00:26:42.998 [2024-07-15 17:08:49.650905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.650945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.651135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.651148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.651272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.651287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.651526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.651555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.651760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.651798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 
00:26:42.998 [2024-07-15 17:08:49.651994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.652007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.652135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.652177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.652322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.652354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.652563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.652592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.652712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.652741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 
00:26:42.998 [2024-07-15 17:08:49.653025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.653055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.653219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.653269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.653372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.653385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.653635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.653664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.653899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.653928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 
00:26:42.998 [2024-07-15 17:08:49.654190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.654220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.654396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.654425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.654662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.654691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.654931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.654961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.655174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.655204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 
00:26:42.998 [2024-07-15 17:08:49.655489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.655503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.655635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.655648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.655754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.655767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.655966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.655980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.656154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.656167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 
00:26:42.998 [2024-07-15 17:08:49.656353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.656368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.656556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.656586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.656803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.656831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.657039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.657069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 17:08:49.657215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 17:08:49.657259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 
00:26:42.998 [2024-07-15 17:08:49.657523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.998 [2024-07-15 17:08:49.657574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:42.998 qpair failed and we were unable to recover it.
00:26:42.998 [2024-07-15 17:08:49.657758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.998 [2024-07-15 17:08:49.657802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.998 qpair failed and we were unable to recover it.
00:26:42.998 [2024-07-15 17:08:49.658010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.998 [2024-07-15 17:08:49.658041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.998 qpair failed and we were unable to recover it.
00:26:42.998 [2024-07-15 17:08:49.658188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.998 [2024-07-15 17:08:49.658229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.998 qpair failed and we were unable to recover it.
00:26:42.998 [2024-07-15 17:08:49.658423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.998 [2024-07-15 17:08:49.658437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.998 qpair failed and we were unable to recover it.
00:26:42.998 [2024-07-15 17:08:49.658708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.998 [2024-07-15 17:08:49.658721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.998 qpair failed and we were unable to recover it.
00:26:42.998 [2024-07-15 17:08:49.658900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.998 [2024-07-15 17:08:49.658930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.998 qpair failed and we were unable to recover it.
00:26:42.998 [2024-07-15 17:08:49.659143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.998 [2024-07-15 17:08:49.659173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.998 qpair failed and we were unable to recover it.
00:26:42.998 [2024-07-15 17:08:49.659384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.998 [2024-07-15 17:08:49.659423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.998 qpair failed and we were unable to recover it.
00:26:42.998 [2024-07-15 17:08:49.659601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.998 [2024-07-15 17:08:49.659615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.999 qpair failed and we were unable to recover it.
00:26:42.999 [2024-07-15 17:08:49.659730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.999 [2024-07-15 17:08:49.659743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.999 qpair failed and we were unable to recover it.
00:26:42.999 [2024-07-15 17:08:49.659927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.999 [2024-07-15 17:08:49.659941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.999 qpair failed and we were unable to recover it.
00:26:42.999 [2024-07-15 17:08:49.660035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.999 [2024-07-15 17:08:49.660048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.999 qpair failed and we were unable to recover it.
00:26:42.999 [2024-07-15 17:08:49.660308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.999 [2024-07-15 17:08:49.660339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.999 qpair failed and we were unable to recover it.
00:26:42.999 [2024-07-15 17:08:49.660535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.999 [2024-07-15 17:08:49.660564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.999 qpair failed and we were unable to recover it.
00:26:42.999 [2024-07-15 17:08:49.660793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.999 [2024-07-15 17:08:49.660823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.999 qpair failed and we were unable to recover it.
00:26:42.999 [2024-07-15 17:08:49.661035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.999 [2024-07-15 17:08:49.661064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.999 qpair failed and we were unable to recover it.
00:26:42.999 [2024-07-15 17:08:49.661331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.999 [2024-07-15 17:08:49.661345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:42.999 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.661522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.661536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.661715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.661729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.661825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.661838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.661960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.661974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.662148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.662162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.662338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.662352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.662612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.662626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.662873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.662886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.663054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.663068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.663255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.663269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.663482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.663516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.663731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.663760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.663912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.663941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.664240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.664270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.664543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.664573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.664712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.664741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.665024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.665053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.665247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.665278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.665526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.665536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.665701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.665710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.665817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.665826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.666054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.666063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.666290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.666300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.666490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.666500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.666669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.666698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.666925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.666954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.667107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.276 [2024-07-15 17:08:49.667137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.276 qpair failed and we were unable to recover it.
00:26:43.276 [2024-07-15 17:08:49.667247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.667257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.667502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.667512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.667655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.667665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.667755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.667764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.667857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.667866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.667983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.667992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.668219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.668255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.668404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.668433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.668566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.668596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.668862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.668892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.669052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.669082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.669286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.669316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.669602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.669612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.669781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.669790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.669899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.669909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.670129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.670139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.670423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.670433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.670547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.670557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.670666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.670676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.670847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.670857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.671032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.671061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.671269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.671300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.671497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.671526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.671682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.671716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.671980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.672009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.672244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.672274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.672408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.672417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.672650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.672679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.672835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.672864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.673134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.673163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.673377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.673387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.673607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.673616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.673774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.673801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.674005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.674033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.674243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.674273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.674481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.674491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.674736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.674765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.675002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.675031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.675309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.675340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.675607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.675636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.277 qpair failed and we were unable to recover it.
00:26:43.277 [2024-07-15 17:08:49.675898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.277 [2024-07-15 17:08:49.675927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.278 qpair failed and we were unable to recover it.
00:26:43.278 [2024-07-15 17:08:49.676076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.278 [2024-07-15 17:08:49.676105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.278 qpair failed and we were unable to recover it.
00:26:43.278 [2024-07-15 17:08:49.676233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.278 [2024-07-15 17:08:49.676243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.278 qpair failed and we were unable to recover it.
00:26:43.278 [2024-07-15 17:08:49.676474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.278 [2024-07-15 17:08:49.676503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.278 qpair failed and we were unable to recover it.
00:26:43.278 [2024-07-15 17:08:49.676744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.278 [2024-07-15 17:08:49.676774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.278 qpair failed and we were unable to recover it.
00:26:43.278 [2024-07-15 17:08:49.676977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.278 [2024-07-15 17:08:49.677007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.278 qpair failed and we were unable to recover it.
00:26:43.278 [2024-07-15 17:08:49.677157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.278 [2024-07-15 17:08:49.677186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.278 qpair failed and we were unable to recover it.
00:26:43.278 [2024-07-15 17:08:49.677339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.278 [2024-07-15 17:08:49.677368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.278 qpair failed and we were unable to recover it.
00:26:43.278 [2024-07-15 17:08:49.677544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.278 [2024-07-15 17:08:49.677554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.278 qpair failed and we were unable to recover it.
00:26:43.278 [2024-07-15 17:08:49.677806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.278 [2024-07-15 17:08:49.677835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.278 qpair failed and we were unable to recover it.
00:26:43.278 [2024-07-15 17:08:49.678070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.678100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.678365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.678395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.678520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.678549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.678750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.678780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.678944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.678973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 
00:26:43.278 [2024-07-15 17:08:49.679178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.679188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.679343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.679353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.679419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.679428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.679621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.679632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.679737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.679746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 
00:26:43.278 [2024-07-15 17:08:49.679936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.679946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.680100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.680110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.680359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.680389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.680632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.680671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.680894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.680924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 
00:26:43.278 [2024-07-15 17:08:49.681135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.681163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.681430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.681461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.681660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.681689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.681979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.682007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.682164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.682174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 
00:26:43.278 [2024-07-15 17:08:49.682331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.682341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.682459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.682469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.682668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.682678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.682849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.682858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.683016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.683026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 
00:26:43.278 [2024-07-15 17:08:49.683197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.683207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.683384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.683414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.683661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.683691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.683903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 17:08:49.683931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 17:08:49.684150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.684179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 17:08:49.684339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.684368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.684579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.684609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.684833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.684862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.684995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.685024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.685245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.685275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 17:08:49.685556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.685566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.685754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.685763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.685886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.685896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.686004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.686015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.686251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.686281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 17:08:49.686436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.686464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.686596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.686624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.686916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.686945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.687216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.687264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.687466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.687476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 17:08:49.687739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.687748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.687939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.687949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.688125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.688134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.688366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.688397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.688679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.688707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 17:08:49.688864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.688892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.689157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.689187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.689389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.689399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.689577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.689588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.689761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.689791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 17:08:49.689991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.690019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.690288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.690319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.690458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.690486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.690632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.690660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.690935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.690963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 17:08:49.691127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.691137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.691389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.691419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.691571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.691600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.691812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.691840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.691997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.692026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 17:08:49.692248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.692279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.692426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.692472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.692653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.692673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.692829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.692839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 17:08:49.692998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 17:08:49.693008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.280 [2024-07-15 17:08:49.693242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.693272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.693511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.693540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.693739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.693767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.694031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.694060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.694275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.694286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 
00:26:43.280 [2024-07-15 17:08:49.694403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.694413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.694531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.694541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.694700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.694709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.694823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.694833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.694943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.694953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 
00:26:43.280 [2024-07-15 17:08:49.695069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.695080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.695264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.695274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.695429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.695438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.695663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.695673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.695894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.695903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 
00:26:43.280 [2024-07-15 17:08:49.696090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.696101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.696203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.696213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.696305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.696315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.696484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.696493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.696647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.696657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 
00:26:43.280 [2024-07-15 17:08:49.696830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.696840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.696962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.696972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.697142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.697152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.697273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.697286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.697443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.697453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 
00:26:43.280 [2024-07-15 17:08:49.697666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.697676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.697879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.697909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.698124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.698152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.698420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.698451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.698651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.698661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 
00:26:43.280 [2024-07-15 17:08:49.698818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.698845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.699007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.699035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.699260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 17:08:49.699292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 17:08:49.699508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.699538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.699669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.699698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 17:08:49.699913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.699942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.700105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.700133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.700282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.700292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.700514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.700525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.700681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.700690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 17:08:49.700813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.700823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.700987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.700997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.701101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.701111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.701209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.701220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.701379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.701388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 17:08:49.701561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.701571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.701741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.701770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.701962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.701992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.702125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.702153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.702364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.702374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 17:08:49.702550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.702590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.702739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.702768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.703055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.703084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.703301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.703311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.703534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.703544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 17:08:49.703724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.703733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.703907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.703936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.704145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.704175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.704387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.704418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.704558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.704567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 17:08:49.704777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.704805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.704951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.704981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.705191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.705220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.705448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.705483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.705628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.705655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 17:08:49.705874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.705902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.706154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.706163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.706282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.706293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.706480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.706490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.706602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.706611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 17:08:49.706787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.706798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.706903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.706913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.707086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.707096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.707256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.707266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.707376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.707385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 17:08:49.707538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 17:08:49.707548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 17:08:49.707781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.707810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.708030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.708060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.708260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.708289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.708406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.708416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 
00:26:43.282 [2024-07-15 17:08:49.708523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.708533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.708786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.708795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.708951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.708961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.709123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.709133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.709318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.709328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 
00:26:43.282 [2024-07-15 17:08:49.709522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.709531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.709631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.709642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.709821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.709831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.710055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.710064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.710222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.710237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 
00:26:43.282 [2024-07-15 17:08:49.710425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.710455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.710760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.710788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.711077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.711106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.711256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.711286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.711388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.711397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 
00:26:43.282 [2024-07-15 17:08:49.711502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.711512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.711684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.711693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.711916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.711926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.712064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.712074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.712190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.712200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 
00:26:43.282 [2024-07-15 17:08:49.712368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.712399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.712550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.712579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.712799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.712828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.712989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.713023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.713319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.713349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 
00:26:43.282 [2024-07-15 17:08:49.713584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.713613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.713817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.713846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.713990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.714018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.714163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.714191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 17:08:49.714391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 17:08:49.714401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 
00:26:43.282 [2024-07-15 17:08:49.714489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.282 [2024-07-15 17:08:49.714498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.282 qpair failed and we were unable to recover it.
[... the same pair of errors — posix.c:1038:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats continuously from 17:08:49.714 through 17:08:49.739 ...]
00:26:43.285 [2024-07-15 17:08:49.739665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.285 [2024-07-15 17:08:49.739674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.285 qpair failed and we were unable to recover it.
00:26:43.285 [2024-07-15 17:08:49.739787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.739797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 00:26:43.285 [2024-07-15 17:08:49.739911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.739921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 00:26:43.285 [2024-07-15 17:08:49.740088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.740099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 00:26:43.285 [2024-07-15 17:08:49.740275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.740306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 00:26:43.285 [2024-07-15 17:08:49.740511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.740541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 
00:26:43.285 [2024-07-15 17:08:49.740753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.740780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 00:26:43.285 [2024-07-15 17:08:49.741060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.741090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 00:26:43.285 [2024-07-15 17:08:49.741380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.741390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 00:26:43.285 [2024-07-15 17:08:49.741620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.741629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 00:26:43.285 [2024-07-15 17:08:49.741852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.741862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 
00:26:43.285 [2024-07-15 17:08:49.741987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.741996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 00:26:43.285 [2024-07-15 17:08:49.742163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.742173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 00:26:43.285 [2024-07-15 17:08:49.742341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.285 [2024-07-15 17:08:49.742351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.285 qpair failed and we were unable to recover it. 00:26:43.285 [2024-07-15 17:08:49.742457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.742466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.742638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.742668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 
00:26:43.286 [2024-07-15 17:08:49.742865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.742893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.743030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.743060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.743267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.743298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.743506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.743535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.743817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.743846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 
00:26:43.286 [2024-07-15 17:08:49.744036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.744065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.744354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.744365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.744478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.744488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.744654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.744664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.744836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.744846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 
00:26:43.286 [2024-07-15 17:08:49.745044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.745074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.745342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.745372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.745533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.745561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.745757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.745766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.745892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.745920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 
00:26:43.286 [2024-07-15 17:08:49.746209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.746245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.746508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.746537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.746809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.746838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.747046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.747075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.747289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.747319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 
00:26:43.286 [2024-07-15 17:08:49.747470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.747498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.747692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.747702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.747901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.747929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.748124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.748157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.748371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.748401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 
00:26:43.286 [2024-07-15 17:08:49.748652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.748662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.748755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.748764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.748931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.748941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.749031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.749041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.749207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.749216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 
00:26:43.286 [2024-07-15 17:08:49.749399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.749410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.749583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.749612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.749827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.749856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.750065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.750093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.750331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.750341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 
00:26:43.286 [2024-07-15 17:08:49.750436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.750445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.750542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.750552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.750658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.750668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.286 qpair failed and we were unable to recover it. 00:26:43.286 [2024-07-15 17:08:49.750785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.286 [2024-07-15 17:08:49.750795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.750956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.750965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 
00:26:43.287 [2024-07-15 17:08:49.751185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.751195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.751369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.751379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.751489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.751499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.751664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.751674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.751767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.751778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 
00:26:43.287 [2024-07-15 17:08:49.751957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.751967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.752085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.752095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.752263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.752273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.752391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.752401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.752560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.752570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 
00:26:43.287 [2024-07-15 17:08:49.752762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.752772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.752930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.752940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.753046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.753055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.753209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.753219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.753386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.753426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 
00:26:43.287 [2024-07-15 17:08:49.753622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.753651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.753814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.753842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.753998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.754026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.754234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.754264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 00:26:43.287 [2024-07-15 17:08:49.754547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.287 [2024-07-15 17:08:49.754577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.287 qpair failed and we were unable to recover it. 
00:26:43.287 [2024-07-15 17:08:49.754886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.754914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.755127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.755156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.755373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.755404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.755573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.755585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.755738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.755747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.755974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.756003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.756201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.756238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.756508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.756518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.756714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.756724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.756921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.756931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.757087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.757097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.757303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.757333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.757530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.757559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.757707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.757736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.758024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.758053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.758267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.758296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.758431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.758459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.758639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.758648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.758883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.758912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.759059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.287 [2024-07-15 17:08:49.759088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.287 qpair failed and we were unable to recover it.
00:26:43.287 [2024-07-15 17:08:49.759297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.759328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.759533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.759542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.759716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.759726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.759961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.759990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.760138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.760166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.760435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.760467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.760680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.760690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.760867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.760877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.761046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.761055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.761304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.761343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.761615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.761684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.761973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.762007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.762210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.762274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.762491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.762522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.762719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.762749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.763036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.763065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.763281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.763311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.763527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.763557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.763797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.763810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.763985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.763998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.764252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.764283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.764573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.764602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.764906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.764936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.765150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.765187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.765460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.765490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.765775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.765789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.766020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.766034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.766262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.766276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.766444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.766457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.766647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.766677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.766891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.766921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.767095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.767124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.767345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.767359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.767534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.767547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.767779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.767809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.768078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.768107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.768314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.768358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.768634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.768647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.768823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.288 [2024-07-15 17:08:49.768836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.288 qpair failed and we were unable to recover it.
00:26:43.288 [2024-07-15 17:08:49.769009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.769022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.769213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.769230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.769481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.769511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.769666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.769696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.769912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.769941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.770140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.770169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.770382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.770396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.770514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.770527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.770693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.770706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.770825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.770839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.770950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.770964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.771177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.771189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.771349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.771359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.771518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.771528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.771641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.771651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.771842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.771852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.772067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.772077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.772278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.772308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.772513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.772541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.772765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.772794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.773005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.773033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.773247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.773256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.773452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.773462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.773570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.773580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.773691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.773700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.773879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.773889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.774058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.774068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.774255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.774285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.774448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.774477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.774633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.289 [2024-07-15 17:08:49.774662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.289 qpair failed and we were unable to recover it.
00:26:43.289 [2024-07-15 17:08:49.774814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.774841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.774962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.774990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.775191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.775220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.775361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.775391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.775652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.775680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.775879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.775908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.776057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.776085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.776288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.776318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.776525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.776555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.776703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.776731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.776963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.776991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.777283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.777313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.777460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.777488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.777692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.777721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.777930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.777959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.778179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.778208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.778474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.778504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.778739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.778768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.779046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.779075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.779243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.779273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.779573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.779602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.779764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.779798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.780087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.780116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.780382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.780413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.780635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.780664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.780863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 17:08:49.780893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 17:08:49.781116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.781146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 17:08:49.781363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.781393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 17:08:49.781541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.781570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 17:08:49.781784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.781813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 17:08:49.782013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.782042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 
00:26:43.290 [2024-07-15 17:08:49.782249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.782279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 17:08:49.782546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.782556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 17:08:49.782710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.782720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 17:08:49.782947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.782976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 17:08:49.783246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.783277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 
00:26:43.290 [2024-07-15 17:08:49.783495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.783525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 17:08:49.783687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.783715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 17:08:49.783919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.783948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 17:08:49.784221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 17:08:49.784261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.784529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.784559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 17:08:49.784726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.784756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.785060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.785089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.785299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.785330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.785587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.785597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.785720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.785730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 17:08:49.785991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.786020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.786215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.786254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.786406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.786435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.786710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.786720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.786837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.786847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 17:08:49.787034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.787063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.787275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.787305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.787462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.787491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.787709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.787719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.787884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.787894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 17:08:49.788162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.788191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.788346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.788356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.788521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.788530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.788710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.788738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.788971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.789000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 17:08:49.789214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.789270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.789508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.789538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.789684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.789693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.789852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.789862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.790088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.790098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 17:08:49.790322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.790333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.790509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.790538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.790749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.790779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.791010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.791039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.791191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.791221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 17:08:49.791364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.791393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.791613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.791651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.791823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.791833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.792097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.792125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.792276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.792306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 17:08:49.792565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.792594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.792791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.792821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.792971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.793000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.793207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.793258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.793404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.793433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 17:08:49.793596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.793625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.793887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.793897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.794082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.794092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.794330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 17:08:49.794360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 17:08:49.794554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.794564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 
00:26:43.292 [2024-07-15 17:08:49.794743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.794772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.794985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.795014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.795163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.795193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.795411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.795441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.795591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.795620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 
00:26:43.292 [2024-07-15 17:08:49.795817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.795827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.795955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.795966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.796138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.796148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.796307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.796317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.796441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.796451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 
00:26:43.292 [2024-07-15 17:08:49.796684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.796712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.796854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.796883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.797108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.797137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.797370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.797400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.797677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.797706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 
00:26:43.292 [2024-07-15 17:08:49.797920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.797954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.798122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.798152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.798379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.798389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.798515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.798525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 17:08:49.798778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 17:08:49.798807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 
00:26:43.295 [2024-07-15 17:08:49.821149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.821159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.821283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.821293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.821459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.821469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.821662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.821691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.821822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.821852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 
00:26:43.295 [2024-07-15 17:08:49.822008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.822042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.822244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.822275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.822436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.822465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.822614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.822643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.822841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.822870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 
00:26:43.295 [2024-07-15 17:08:49.823077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.823106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.823328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.823358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.823572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.823601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.823806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.823816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.824086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.824115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 
00:26:43.295 [2024-07-15 17:08:49.824333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.824363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.824600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.824629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.824829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.824858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.825052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.825082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.825250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.825280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 
00:26:43.295 [2024-07-15 17:08:49.825494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.825523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.825794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.825823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.826023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.826053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.826241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.826272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.826471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.826481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 
00:26:43.295 [2024-07-15 17:08:49.826709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.826738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 17:08:49.826957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 17:08:49.826987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.827196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.827245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.827467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.827497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.827665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.827694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 17:08:49.827877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.827886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.828083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.828112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.828280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.828311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.828599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.828629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.828806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.828816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 17:08:49.828937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.828947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.829135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.829145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.829237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.829246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.829473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.829483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.829646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.829656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 17:08:49.829909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.829938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.830096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.830125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.830262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.830292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.830511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.830541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.830803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.830832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 17:08:49.831090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.831125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.831414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.831445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.831661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.831671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.831862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.831872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.832032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.832042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 17:08:49.832196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.832206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.832458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.832478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.832569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.832578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.832745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.832755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.832861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.832871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 17:08:49.832972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.832982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.833136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.833146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.833272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.833282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.833506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.833516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.833674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.833684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 17:08:49.833839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.833848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.834053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.834082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.834298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.834328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.834541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.834571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.834734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.834763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 17:08:49.834960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.834989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.835138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.835167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.835320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.835351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 17:08:49.835506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 17:08:49.835536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 17:08:49.835800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 17:08:49.835810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 
00:26:43.297 [2024-07-15 17:08:49.836030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 17:08:49.836040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 17:08:49.836200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 17:08:49.836209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 17:08:49.836436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 17:08:49.836447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 17:08:49.836550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 17:08:49.836560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 17:08:49.836736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 17:08:49.836745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 
00:26:43.297 [2024-07-15 17:08:49.836865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.297 [2024-07-15 17:08:49.836875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.297 qpair failed and we were unable to recover it.
00:26:43.297 [... the same triplet — posix.c:1038:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from [2024-07-15 17:08:49.837033] through [2024-07-15 17:08:49.859715] (log clock 00:26:43.297–00:26:43.300); identical retries elided ...]
00:26:43.300 [2024-07-15 17:08:49.859915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.859943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.860147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.860175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.860353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.860383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.860514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.860544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.860695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.860722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 
00:26:43.300 [2024-07-15 17:08:49.860922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.860932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.861107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.861117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.861286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.861296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.861363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.861372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.861530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.861540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 
00:26:43.300 [2024-07-15 17:08:49.861649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.861659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.861821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.861831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.862079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.862108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.862318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.862349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.862549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.862559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 
00:26:43.300 [2024-07-15 17:08:49.862756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.862790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.863004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.863032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.863190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.863218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.863397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.863427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.863619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.863629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 
00:26:43.300 [2024-07-15 17:08:49.863797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.863827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.864091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.864120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.864357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.864388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.864697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.864707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.864931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.864941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 
00:26:43.300 [2024-07-15 17:08:49.865029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.865038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.865200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.865210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.865294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.865304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.865418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.865429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.865545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.865555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 
00:26:43.300 [2024-07-15 17:08:49.865729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.865739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.865920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.865950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.866108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.866137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.866404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.866434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 00:26:43.300 [2024-07-15 17:08:49.866622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.300 [2024-07-15 17:08:49.866632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.300 qpair failed and we were unable to recover it. 
00:26:43.300 [2024-07-15 17:08:49.866791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.866801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.866919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.866929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.867123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.867133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.867238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.867248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.867470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.867480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 
00:26:43.301 [2024-07-15 17:08:49.867706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.867716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.867876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.867886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.868069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.868100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.868315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.868345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.868548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.868578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 
00:26:43.301 [2024-07-15 17:08:49.868792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.868802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.868987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.868997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.869264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.869294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.869495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.869524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.869672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.869702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 
00:26:43.301 [2024-07-15 17:08:49.869838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.869848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.870102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.870131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.870369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.870399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.870612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.870642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.870787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.870816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 
00:26:43.301 [2024-07-15 17:08:49.871008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.871043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.871261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.871290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.871584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.871614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.871781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.871810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.872040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.872050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 
00:26:43.301 [2024-07-15 17:08:49.872273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.872283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.872530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.872540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.872662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.872672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.872829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.872839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.873027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.873037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 
00:26:43.301 [2024-07-15 17:08:49.873263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.873273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.873457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.873467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.873647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.873676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.873984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.874013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 00:26:43.301 [2024-07-15 17:08:49.874257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.874289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.301 qpair failed and we were unable to recover it. 
00:26:43.301 [2024-07-15 17:08:49.874555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.301 [2024-07-15 17:08:49.874584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.302 qpair failed and we were unable to recover it. 00:26:43.302 [2024-07-15 17:08:49.874849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.302 [2024-07-15 17:08:49.874878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.302 qpair failed and we were unable to recover it. 00:26:43.302 [2024-07-15 17:08:49.875146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.302 [2024-07-15 17:08:49.875175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.302 qpair failed and we were unable to recover it. 00:26:43.302 [2024-07-15 17:08:49.875345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.302 [2024-07-15 17:08:49.875375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.302 qpair failed and we were unable to recover it. 00:26:43.302 [2024-07-15 17:08:49.875589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.302 [2024-07-15 17:08:49.875619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.302 qpair failed and we were unable to recover it. 
00:26:43.302 [2024-07-15 17:08:49.875837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.875866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.876131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.876160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.876316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.876347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.876606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.876636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.876782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.876811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.877032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.877041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.877197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.877207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.877369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.877379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.877479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.877489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.877693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.877703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.877819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.877829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.877988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.877998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.878092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.878101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.878375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.878385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.878555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.878565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.878686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.878696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.878880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.878889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.878956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.878966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.879155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.879164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.879323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.879333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.879517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.879527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.879651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.879661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.879772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.879783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.879958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.879968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.880056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.880065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.880292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.880303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.880410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.880420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.880525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.880535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.880693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.880703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.880856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.880865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.880971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.880980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.881092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.881102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.881198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.881208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.881320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.881331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.881489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.881499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.881601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.881611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.881773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.881784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.302 qpair failed and we were unable to recover it.
00:26:43.302 [2024-07-15 17:08:49.882034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.302 [2024-07-15 17:08:49.882044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.882230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.882240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.882331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.882341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.882513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.882522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.882710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.882719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.882941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.882951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.883121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.883131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.883248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.883259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.883350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.883359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.883530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.883540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.883747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.883759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.883921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.883930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.884054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.884064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.884258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.884267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.884427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.884437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.884533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.884543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.884648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.884658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.884818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.884829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.884931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.884940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.885108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.885119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.885231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.885241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.885430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.885440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.885615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.885625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.885725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.885737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.885840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.885850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.886003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.886013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.886217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.886257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.886406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.886435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.886639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.886670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.886828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.886838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.887000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.887009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.887257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.887268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.887375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.887385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.887542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.887552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.887761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.887790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.887946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.887976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.888179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.888209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.888423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.888454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.888597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.888626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.888733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.888761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.888958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.888985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.889255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.303 [2024-07-15 17:08:49.889266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.303 qpair failed and we were unable to recover it.
00:26:43.303 [2024-07-15 17:08:49.889387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.889397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.889577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.889606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.889801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.889831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.890113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.890142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.890379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.890409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.890708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.890738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.890932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.890942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.891047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.891057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.891205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.891216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.891333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.891344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.891426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.891435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.891589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.891600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.891782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.891822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.892042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.892071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.892282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.892311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.892453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.892482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.892752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.892762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.892986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.892996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.893072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.893081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.893262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.893273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.893375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.893386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.893565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.893574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.893695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.893706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.893800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.893809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.893889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.893898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.894002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.894012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.894115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.894124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.894345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.894356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.894534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.894545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.894712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.894722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.894972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.894981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.895154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.895163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.895324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.895334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.895424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.895433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.895604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.895614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.895736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.304 [2024-07-15 17:08:49.895747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.304 qpair failed and we were unable to recover it.
00:26:43.304 [2024-07-15 17:08:49.895967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.304 [2024-07-15 17:08:49.895977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.304 qpair failed and we were unable to recover it. 00:26:43.304 [2024-07-15 17:08:49.896079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.304 [2024-07-15 17:08:49.896089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.304 qpair failed and we were unable to recover it. 00:26:43.304 [2024-07-15 17:08:49.896317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.304 [2024-07-15 17:08:49.896328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.304 qpair failed and we were unable to recover it. 00:26:43.304 [2024-07-15 17:08:49.896447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.304 [2024-07-15 17:08:49.896456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.304 qpair failed and we were unable to recover it. 00:26:43.304 [2024-07-15 17:08:49.896715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.304 [2024-07-15 17:08:49.896724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.304 qpair failed and we were unable to recover it. 
00:26:43.305 [2024-07-15 17:08:49.896838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.896847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.897054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.897065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.897198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.897207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.897438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.897469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.897685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.897714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 
00:26:43.305 [2024-07-15 17:08:49.897923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.897965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.898076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.898085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.898247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.898259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.898355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.898366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.898508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.898518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 
00:26:43.305 [2024-07-15 17:08:49.898636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.898646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.898762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.898771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.898968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.898979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.899075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.899085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.899260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.899270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 
00:26:43.305 [2024-07-15 17:08:49.899480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.899510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.899658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.899687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.899890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.899899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.899981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.899990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.900171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.900182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 
00:26:43.305 [2024-07-15 17:08:49.900417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.900428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.900519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.900528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.900711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.900721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.900947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.900956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.901121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.901132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 
00:26:43.305 [2024-07-15 17:08:49.901289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.901299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.901458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.901468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.901636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.901647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.901875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.901885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.902056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.902066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 
00:26:43.305 [2024-07-15 17:08:49.902228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.902239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.902463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.902473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.902570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.902582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.902743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.902754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.902864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.902874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 
00:26:43.305 [2024-07-15 17:08:49.902969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.902979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.903147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.903157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.903332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.903343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.903514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.903524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 00:26:43.305 [2024-07-15 17:08:49.903684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.305 [2024-07-15 17:08:49.903693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.305 qpair failed and we were unable to recover it. 
00:26:43.305 [2024-07-15 17:08:49.903793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.903802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.903925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.903935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.904037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.904047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.904244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.904254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.904426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.904436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 
00:26:43.306 [2024-07-15 17:08:49.904609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.904619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.904706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.904715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.904878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.904906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.905046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.905075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.905336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.905366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 
00:26:43.306 [2024-07-15 17:08:49.905556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.905566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.905803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.905812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.906025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.906035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.906246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.906255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.906373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.906382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 
00:26:43.306 [2024-07-15 17:08:49.906588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.906599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.906772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.906783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.906974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.906984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.907091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.907101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.907273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.907284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 
00:26:43.306 [2024-07-15 17:08:49.907380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.907389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.907548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.907559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.907726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.907736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.907922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.907933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.908051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.908061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 
00:26:43.306 [2024-07-15 17:08:49.908123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.908132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.908229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.908239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.908341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.908351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.908468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.908478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.908569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.908579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 
00:26:43.306 [2024-07-15 17:08:49.908682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.908691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.908890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.908900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.909080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.909090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.909188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.909198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 00:26:43.306 [2024-07-15 17:08:49.909311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.306 [2024-07-15 17:08:49.909322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.306 qpair failed and we were unable to recover it. 
00:26:43.594 [2024-07-15 17:08:49.928459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.594 [2024-07-15 17:08:49.928469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.594 qpair failed and we were unable to recover it. 00:26:43.594 [2024-07-15 17:08:49.928724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.594 [2024-07-15 17:08:49.928734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.594 qpair failed and we were unable to recover it. 00:26:43.594 [2024-07-15 17:08:49.928902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.594 [2024-07-15 17:08:49.928911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.594 qpair failed and we were unable to recover it. 00:26:43.594 [2024-07-15 17:08:49.929092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.594 [2024-07-15 17:08:49.929101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.594 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.929288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.929298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 
00:26:43.595 [2024-07-15 17:08:49.929461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.929471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.929645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.929655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.929856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.929865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.930024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.930034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.930187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.930220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 
00:26:43.595 [2024-07-15 17:08:49.930411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.930427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.930686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.930717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.930865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.930894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.931122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.931151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.931366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.931399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 
00:26:43.595 [2024-07-15 17:08:49.931598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.931627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.931814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.931828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.932030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.932044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.932235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.932251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.932363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.932376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 
00:26:43.595 [2024-07-15 17:08:49.932484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.932498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.932688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.932702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.932864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.932877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.932996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.933010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.933180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.933193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 
00:26:43.595 [2024-07-15 17:08:49.933306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.933318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.933478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.933488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.933642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.933651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.933739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.933747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.933976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.934006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 
00:26:43.595 [2024-07-15 17:08:49.934213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.934250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.934394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.934422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.934706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.934736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.935001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.935029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.935165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.935174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 
00:26:43.595 [2024-07-15 17:08:49.935332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.935342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.935566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.935575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.935676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.935686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.935865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.935875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.935955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.935963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 
00:26:43.595 [2024-07-15 17:08:49.936065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.936075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.936238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.936248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.936411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.936421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.936493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.936502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.936618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.936629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 
00:26:43.595 [2024-07-15 17:08:49.936795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.936805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.936969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.936979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.937085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.937095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.937257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.937268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.937368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.937384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 
00:26:43.595 [2024-07-15 17:08:49.937551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.937562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.595 qpair failed and we were unable to recover it. 00:26:43.595 [2024-07-15 17:08:49.937718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.595 [2024-07-15 17:08:49.937727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.937818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.937827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.937966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.937975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.938097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.938108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 
00:26:43.596 [2024-07-15 17:08:49.938355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.938366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.938531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.938541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.938656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.938666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.938775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.938785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.938968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.938978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 
00:26:43.596 [2024-07-15 17:08:49.939145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.939155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.939255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.939264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.939390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.939401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.939570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.939580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.939697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.939708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 
00:26:43.596 [2024-07-15 17:08:49.939898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.939908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.940089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.940099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.940261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.940272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.940385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.940395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.940498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.940508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 
00:26:43.596 [2024-07-15 17:08:49.940676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.940685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.940827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.940838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.940928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.940936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.941233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.941244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 00:26:43.596 [2024-07-15 17:08:49.941409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.596 [2024-07-15 17:08:49.941419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.596 qpair failed and we were unable to recover it. 
00:26:43.596 [2024-07-15 17:08:49.941638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.596 [2024-07-15 17:08:49.941667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.596 qpair failed and we were unable to recover it.
00:26:43.599 [2024-07-15 17:08:49.962321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.962351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.962546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.962576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.962793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.962822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.963093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.963102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.963259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.963270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 
00:26:43.599 [2024-07-15 17:08:49.963382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.963392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.963620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.963630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.963814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.963824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.964032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.964061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.964211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.964246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 
00:26:43.599 [2024-07-15 17:08:49.964447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.964477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.964712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.964742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.964908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.964937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.965141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.965167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.599 [2024-07-15 17:08:49.965334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.965344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 
00:26:43.599 [2024-07-15 17:08:49.965572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.599 [2024-07-15 17:08:49.965601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.599 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.965837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.965865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.966083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.966113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.966336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.966347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.966511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.966523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 
00:26:43.600 [2024-07-15 17:08:49.966771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.966781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.967040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.967049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.967173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.967183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.967290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.967300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.967412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.967422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 
00:26:43.600 [2024-07-15 17:08:49.967529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.967539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.967719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.967729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.967926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.967936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.968038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.968048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.968300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.968310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 
00:26:43.600 [2024-07-15 17:08:49.968468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.968478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.968652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.968662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.968831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.968841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.968998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.969027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.969178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.969208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 
00:26:43.600 [2024-07-15 17:08:49.969475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.969505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.969717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.969747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.969983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.969993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.970237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.970247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.970341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.970351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 
00:26:43.600 [2024-07-15 17:08:49.970463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.970473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.970647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.970658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.970828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.970838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.970946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.970955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.971144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.971154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 
00:26:43.600 [2024-07-15 17:08:49.971362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.971372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.971530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.971540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.971642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.971652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.971879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.971889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.972063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.972073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 
00:26:43.600 [2024-07-15 17:08:49.972247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.972257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.972420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.972430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.972548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.972558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.972656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.972669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.600 [2024-07-15 17:08:49.972828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.972837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 
00:26:43.600 [2024-07-15 17:08:49.973065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.600 [2024-07-15 17:08:49.973075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.600 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.973260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.973271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.973374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.973384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.973493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.973503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.973725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.973736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 
00:26:43.601 [2024-07-15 17:08:49.973845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.973855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.974004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.974014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.974116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.974126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.974242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.974253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.974356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.974365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 
00:26:43.601 [2024-07-15 17:08:49.974478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.974488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.974654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.974664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.974761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.974771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.974935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.974945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.975136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.975146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 
00:26:43.601 [2024-07-15 17:08:49.975262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.975273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.975377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.975388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.975516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.975526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.975692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.975702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 00:26:43.601 [2024-07-15 17:08:49.975796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-07-15 17:08:49.975805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.601 qpair failed and we were unable to recover it. 
00:26:43.601 [2024-07-15 17:08:49.975913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.601 [2024-07-15 17:08:49.975924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.601 qpair failed and we were unable to recover it.
00:26:43.601 [... identical connect()/sock-connection-error/"qpair failed and we were unable to recover it." triplet repeated for every reconnect attempt from 17:08:49.976172 through 17:08:49.995093 against tqpair=0x7f4d4c000b90, addr=10.0.0.2, port=4420; errno = 111 (ECONNREFUSED) throughout ...]
00:26:43.604 [2024-07-15 17:08:49.995249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.995260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.995362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.995372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.995595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.995605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.995776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.995786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.995976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.995986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 
00:26:43.604 [2024-07-15 17:08:49.996194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.996204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.996324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.996335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.996580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.996589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.996694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.996704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.996938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.996967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 
00:26:43.604 [2024-07-15 17:08:49.997127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.997157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.997425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.997456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.997671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.997700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.997962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.997991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.998239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.998250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 
00:26:43.604 [2024-07-15 17:08:49.998406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.998417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.998518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.998531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.998639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.998649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.998817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.998827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.999007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.999018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 
00:26:43.604 [2024-07-15 17:08:49.999241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.999253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.999332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.999341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.604 [2024-07-15 17:08:49.999432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.604 [2024-07-15 17:08:49.999442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.604 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:49.999517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:49.999526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:49.999687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:49.999697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 
00:26:43.605 [2024-07-15 17:08:49.999916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:49.999927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.000099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.000109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.000230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.000241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.000334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.000346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.000516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.000527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 
00:26:43.605 [2024-07-15 17:08:50.000648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.000659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.000765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.000775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.000952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.000963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.001137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.001147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.001251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.001262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 
00:26:43.605 [2024-07-15 17:08:50.001369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.001378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.001483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.001494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.001653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.001663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.001774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.001783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.001885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.001895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 
00:26:43.605 [2024-07-15 17:08:50.002013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.002023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.002183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.002194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.002295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.002305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.002473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.002483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.002660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.002670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 
00:26:43.605 [2024-07-15 17:08:50.002847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.002858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.002954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.002965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.003063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.003073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.003243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.003254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.003416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.003426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 
00:26:43.605 [2024-07-15 17:08:50.003518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.003528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.003753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.003763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.003925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.003935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.004115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.004125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.004220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.004235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 
00:26:43.605 [2024-07-15 17:08:50.004412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.004422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.004509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.004523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.004628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.004638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.004830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.004840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.004950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.004960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 
00:26:43.605 [2024-07-15 17:08:50.005054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.005064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.005222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.005237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.005357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.005367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-07-15 17:08:50.005557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-07-15 17:08:50.005567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-07-15 17:08:50.005773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-07-15 17:08:50.005783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 
00:26:43.606 [2024-07-15 17:08:50.005957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-07-15 17:08:50.005967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-07-15 17:08:50.006079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-07-15 17:08:50.006089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-07-15 17:08:50.006183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-07-15 17:08:50.006193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-07-15 17:08:50.006305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-07-15 17:08:50.006316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-07-15 17:08:50.006448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-07-15 17:08:50.006458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 
00:26:43.606 [2024-07-15 17:08:50.006618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-07-15 17:08:50.006629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-07-15 17:08:50.006784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-07-15 17:08:50.006794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-07-15 17:08:50.006900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-07-15 17:08:50.006910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-07-15 17:08:50.007166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-07-15 17:08:50.007176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-07-15 17:08:50.007295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-07-15 17:08:50.007306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 
00:26:43.606 [2024-07-15 17:08:50.007401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.606 [2024-07-15 17:08:50.007411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.606 qpair failed and we were unable to recover it.
00:26:43.609 [2024-07-15 17:08:50.025254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.025265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.025337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.025346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.025443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.025452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.025642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.025652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.025742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.025752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 
00:26:43.609 [2024-07-15 17:08:50.025943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.025953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.026055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.026064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.026172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.026181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.026244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.026253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.026426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.026436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 
00:26:43.609 [2024-07-15 17:08:50.026528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.026538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.026633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.026642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.026837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.026847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.026983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.026993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.027114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.027123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 
00:26:43.609 [2024-07-15 17:08:50.027221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.027250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.027521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.027532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.027660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.027670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.027778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.027788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.027886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.027896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 
00:26:43.609 [2024-07-15 17:08:50.028016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.028027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.028155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.028165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.028311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.028321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.028530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.028540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.028680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.028690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 
00:26:43.609 [2024-07-15 17:08:50.028825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.028835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.028998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.029038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.029200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.029215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.029384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.029428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 00:26:43.609 [2024-07-15 17:08:50.029620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.609 [2024-07-15 17:08:50.029656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.609 qpair failed and we were unable to recover it. 
00:26:43.609 [2024-07-15 17:08:50.029846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.029922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.030133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.030157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.030320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.030332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.030461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.030472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.030589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.030600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 
00:26:43.610 [2024-07-15 17:08:50.030728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.030739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.030854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.030864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.030970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.030980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.031125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.031134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.031316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.031327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 
00:26:43.610 [2024-07-15 17:08:50.031453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.031475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.031636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.031646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.031759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.031769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.031969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.031978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.032093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.032103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 
00:26:43.610 [2024-07-15 17:08:50.032200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.032210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.032380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.032391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.032595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.032604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.032710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.032720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.032820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.032830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 
00:26:43.610 [2024-07-15 17:08:50.032921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.032931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.033069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.033079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.033250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.033260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.033407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.033439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.033653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.033669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 
00:26:43.610 [2024-07-15 17:08:50.033769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.033783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.033952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.033966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.034134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.034148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.034282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.034296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.034476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.034490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 
00:26:43.610 [2024-07-15 17:08:50.034590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.034603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.034687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.034700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.034980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.034994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.035235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.035250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.035371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.035385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 
00:26:43.610 [2024-07-15 17:08:50.035570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.035582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.035682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.035694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.035799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.035809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.035909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.035919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.036119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.036129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 
00:26:43.610 [2024-07-15 17:08:50.036301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 17:08:50.036331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.610 qpair failed and we were unable to recover it. 00:26:43.610 [2024-07-15 17:08:50.036494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.611 [2024-07-15 17:08:50.036524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.611 qpair failed and we were unable to recover it. 00:26:43.611 [2024-07-15 17:08:50.036757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.611 [2024-07-15 17:08:50.036787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.611 qpair failed and we were unable to recover it. 00:26:43.611 [2024-07-15 17:08:50.036894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.611 [2024-07-15 17:08:50.036904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.611 qpair failed and we were unable to recover it. 00:26:43.611 [2024-07-15 17:08:50.037020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.611 [2024-07-15 17:08:50.037029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.611 qpair failed and we were unable to recover it. 
00:26:43.611 [2024-07-15 17:08:50.037190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.037200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.037365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.037375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.037605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.037615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.037735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.037745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.037838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.037849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.037918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.037928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.038028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.038038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.038133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.038142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.038317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.038327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.038443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.038453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.038676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.038706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.038862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.038891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.039045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.039071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.039242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.039252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.039369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.039378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.039503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.039513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.039664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.039673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.039788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.039798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.039905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.039914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.040136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.040146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.040303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.040313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.040395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.040405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.040468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.040477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.040588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.040598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.040695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.040705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.040858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.040868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.041023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.041034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.041134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.041144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.041370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.041382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.611 [2024-07-15 17:08:50.041577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.611 [2024-07-15 17:08:50.041587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.611 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.041701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.041711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.041996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.042031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.042272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.042303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.042473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.042504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.042724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.042754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.042898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.042927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.043071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.043100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.043293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.043303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.043525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.043535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.043668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.043678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.043867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.043909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.044125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.044155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.044312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.044342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.044483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.044512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.044677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.044707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.044912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.044941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.045086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.045115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.045312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.045343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.045471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.045481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.045645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.045655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.045879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.045889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.046088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.046098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.046371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.046381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.046607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.046617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.046808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.046820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.047018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.047028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.047143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.047152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.047265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.047275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.047471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.047491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.047696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.047710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.047886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.047901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.048165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.048179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.048291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.048306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.048434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.048449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.048548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.048561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.048669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.048683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.048915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.048929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.049048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.049062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.049248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.049263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.049367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.049381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.612 qpair failed and we were unable to recover it.
00:26:43.612 [2024-07-15 17:08:50.049488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.612 [2024-07-15 17:08:50.049502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.049590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.049611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.049731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.049745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.049842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.049856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.050088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.050101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.050327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.050338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.050501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.050512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.050740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.050749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.050817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.050827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.050984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.050994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.051197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.051207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.051310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.051320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.051504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.051514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.051621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.051631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.051717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.051727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.051872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.051883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.051996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.052006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.052181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.052191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.052316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.052327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.052554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.052564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.052661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.052670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.052782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.052792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.052958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.052968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.053078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.053087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.053292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.053302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.053452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.053462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.053619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.053629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.053819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.053848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.053996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.054030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.054250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.054281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.054504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.054533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.054688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.054718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.054868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.054898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.055100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.055114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.055348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.055362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.055467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.055481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.055586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.055600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.055677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.055690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.055889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.055902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.056104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.056142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.056266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.613 [2024-07-15 17:08:50.056297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.613 qpair failed and we were unable to recover it.
00:26:43.613 [2024-07-15 17:08:50.056480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.614 [2024-07-15 17:08:50.056516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.614 qpair failed and we were unable to recover it.
00:26:43.614 [2024-07-15 17:08:50.056746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.614 [2024-07-15 17:08:50.056777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.614 qpair failed and we were unable to recover it.
00:26:43.614 [2024-07-15 17:08:50.056978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.614 [2024-07-15 17:08:50.057007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.614 qpair failed and we were unable to recover it.
00:26:43.614 [2024-07-15 17:08:50.057206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.614 [2024-07-15 17:08:50.057247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.614 qpair failed and we were unable to recover it.
00:26:43.614 [2024-07-15 17:08:50.057452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.057466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.057588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.057602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.057797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.057811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.058090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.058104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.058338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.058353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 
00:26:43.614 [2024-07-15 17:08:50.058534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.058548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.058719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.058733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.058897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.058911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.059033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.059047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.059145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.059158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 
00:26:43.614 [2024-07-15 17:08:50.059345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.059360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.059538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.059552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.059741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.059755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.059930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.059944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.060105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.060119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 
00:26:43.614 [2024-07-15 17:08:50.060288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.060302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.060469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.060482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.060598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.060612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.060797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.060811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.060994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.061008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 
00:26:43.614 [2024-07-15 17:08:50.061126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.061140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.061249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.061263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.061422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.061436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.061563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.061579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.061754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.061768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 
00:26:43.614 [2024-07-15 17:08:50.061877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.061891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.062008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.062022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.062194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.062208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.062385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.062400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.062506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.062520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 
00:26:43.614 [2024-07-15 17:08:50.062627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.062640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.062745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.062758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.062944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.062958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.063077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.063090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.063270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.063286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 
00:26:43.614 [2024-07-15 17:08:50.063465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.063478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.063701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.614 [2024-07-15 17:08:50.063715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.614 qpair failed and we were unable to recover it. 00:26:43.614 [2024-07-15 17:08:50.063809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.063823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.063989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.064003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.064256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.064270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 
00:26:43.615 [2024-07-15 17:08:50.064436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.064450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.064662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.064676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.064831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.064846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.065038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.065052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.065238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.065253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 
00:26:43.615 [2024-07-15 17:08:50.065357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.065370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.065532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.065546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.065730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.065743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.066023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.066037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.066133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.066146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 
00:26:43.615 [2024-07-15 17:08:50.066313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.066327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.066490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.066503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.066599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.066613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.066805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.066819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.067002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.067016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 
00:26:43.615 [2024-07-15 17:08:50.067113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.067125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.067295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.067305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.067472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.067482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.067585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.067596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.067760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.067770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 
00:26:43.615 [2024-07-15 17:08:50.067857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.067867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.068026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.068037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.068118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.068128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.068207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.068218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.068380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.068391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 
00:26:43.615 [2024-07-15 17:08:50.068480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.068490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.068603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.068614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.068804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.068814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-07-15 17:08:50.068983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-07-15 17:08:50.068993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.617 [2024-07-15 17:08:50.069109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-07-15 17:08:50.069119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 
00:26:43.617 [2024-07-15 17:08:50.069345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-07-15 17:08:50.069356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-07-15 17:08:50.069434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-07-15 17:08:50.069444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-07-15 17:08:50.069532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-07-15 17:08:50.069541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-07-15 17:08:50.069608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-07-15 17:08:50.069617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-07-15 17:08:50.069840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-07-15 17:08:50.069850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 
00:26:43.617 [2024-07-15 17:08:50.069950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-07-15 17:08:50.069960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-07-15 17:08:50.070052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-07-15 17:08:50.070061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-07-15 17:08:50.070313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-07-15 17:08:50.070323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-07-15 17:08:50.070493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-07-15 17:08:50.070503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-07-15 17:08:50.070616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-07-15 17:08:50.070626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 
00:26:43.620 [2024-07-15 17:08:50.089103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.089113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.089287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.089298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.089396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.089406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.089509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.089519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.089783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.089813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 
00:26:43.620 [2024-07-15 17:08:50.090013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.090043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.090183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.090212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.090427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.090437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.090629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.090639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.090872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.090887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 
00:26:43.620 [2024-07-15 17:08:50.091009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.091020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.091180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.091190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.091417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.091428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.091657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.091687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.091833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.091863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 
00:26:43.620 [2024-07-15 17:08:50.092082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.092092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.092173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.092184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.092294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.092304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.092409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.092420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.092593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.092603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 
00:26:43.620 [2024-07-15 17:08:50.092853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.092863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.092973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.092982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.093203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.093213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.093281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.093291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.093475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.093485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 
00:26:43.620 [2024-07-15 17:08:50.093657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.093687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.093900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.093930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.094152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.094181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.094314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.094325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.094569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.094578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 
00:26:43.620 [2024-07-15 17:08:50.094749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.094758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.095030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-07-15 17:08:50.095039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-07-15 17:08:50.095130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.095140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.095235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.095247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.095401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.095410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 
00:26:43.621 [2024-07-15 17:08:50.095654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.095683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.095888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.095917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.096105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.096135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.096390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.096400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.096510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.096520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 
00:26:43.621 [2024-07-15 17:08:50.096686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.096696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.096883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.096893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.097039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.097076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.097217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.097254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.097452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.097482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 
00:26:43.621 [2024-07-15 17:08:50.097702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.097732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.098019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.098048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.098273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.098305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.098441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.098471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.098625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.098655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 
00:26:43.621 [2024-07-15 17:08:50.098904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.098933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.099152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.099181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.099429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.099460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.099605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.099634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.099767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.099797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 
00:26:43.621 [2024-07-15 17:08:50.099913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.099943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.100103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.100133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.100351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.100382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.100540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.100550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.100779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.100808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 
00:26:43.621 [2024-07-15 17:08:50.101033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.101063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.101334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.101343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.101509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.101519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.101628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.101638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.101826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.101835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 
00:26:43.621 [2024-07-15 17:08:50.102029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-07-15 17:08:50.102039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-07-15 17:08:50.102265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-07-15 17:08:50.102275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-07-15 17:08:50.102444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-07-15 17:08:50.102455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-07-15 17:08:50.102658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-07-15 17:08:50.102688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-07-15 17:08:50.102834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-07-15 17:08:50.102863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 
00:26:43.622 [2024-07-15 17:08:50.103010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-07-15 17:08:50.103045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-07-15 17:08:50.103216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-07-15 17:08:50.103230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-07-15 17:08:50.103411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-07-15 17:08:50.103440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-07-15 17:08:50.103648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-07-15 17:08:50.103682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-07-15 17:08:50.103919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-07-15 17:08:50.103949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 
00:26:43.622 [2024-07-15 17:08:50.104114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-07-15 17:08:50.104143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it.
00:26:43.625 [2024-07-15 17:08:50.127251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.127262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.127358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.127367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.127618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.127628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.127797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.127809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.128060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.128070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 
00:26:43.625 [2024-07-15 17:08:50.128248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.128258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.128443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.128453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.128640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.128669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.128803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.128832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.129048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.129078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 
00:26:43.625 [2024-07-15 17:08:50.129291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.129301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.129530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.129539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.129788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.129798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.130070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.130080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.130249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.130260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 
00:26:43.625 [2024-07-15 17:08:50.130437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.130447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.130639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.130669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.130889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.130918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.131063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.131092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.131303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.131314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 
00:26:43.625 [2024-07-15 17:08:50.131506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.131516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.131599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.131608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.131831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.131841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.132031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.132041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.132137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.132146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 
00:26:43.625 [2024-07-15 17:08:50.132249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.132258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.132424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.132434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.132549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.132558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.132753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.132762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.132932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.132942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 
00:26:43.625 [2024-07-15 17:08:50.133040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.133050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.133206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.133216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.625 qpair failed and we were unable to recover it. 00:26:43.625 [2024-07-15 17:08:50.133337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.625 [2024-07-15 17:08:50.133347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.133519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.133529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.133664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.133674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 
00:26:43.626 [2024-07-15 17:08:50.133844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.133854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.134023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.134032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.134189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.134199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.134402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.134412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.134604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.134614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 
00:26:43.626 [2024-07-15 17:08:50.134773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.134783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.134950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.134960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.135137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.135147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.135341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.135352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.135521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.135531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 
00:26:43.626 [2024-07-15 17:08:50.135768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.135797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.135950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.135979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.136283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.136293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.136534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.136544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.136645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.136655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 
00:26:43.626 [2024-07-15 17:08:50.136814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.136824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.136996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.137006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.137242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.137272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.137427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.137456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.137698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.137727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 
00:26:43.626 [2024-07-15 17:08:50.137966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.137995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.138208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.138266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.138545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.138555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.138733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.138742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.138912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.138922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 
00:26:43.626 [2024-07-15 17:08:50.139129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.139139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.139296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.139307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.139472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.139482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.139647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.139676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.139812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.139842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 
00:26:43.626 [2024-07-15 17:08:50.140008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.140038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.140242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.140272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.140414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.140444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.140711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.140721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.140888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.140898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 
00:26:43.626 [2024-07-15 17:08:50.141147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.141157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.141317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.141327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.141554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.626 [2024-07-15 17:08:50.141564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.626 qpair failed and we were unable to recover it. 00:26:43.626 [2024-07-15 17:08:50.141700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.627 [2024-07-15 17:08:50.141709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.627 qpair failed and we were unable to recover it. 00:26:43.627 [2024-07-15 17:08:50.141830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.627 [2024-07-15 17:08:50.141840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.627 qpair failed and we were unable to recover it. 
00:26:43.627 [2024-07-15 17:08:50.141944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.627 [2024-07-15 17:08:50.141953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.627 qpair failed and we were unable to recover it. 00:26:43.627 [2024-07-15 17:08:50.142067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.627 [2024-07-15 17:08:50.142077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.627 qpair failed and we were unable to recover it. 00:26:43.627 [2024-07-15 17:08:50.142187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.627 [2024-07-15 17:08:50.142196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.627 qpair failed and we were unable to recover it. 00:26:43.627 [2024-07-15 17:08:50.142298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.627 [2024-07-15 17:08:50.142308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.627 qpair failed and we were unable to recover it. 00:26:43.627 [2024-07-15 17:08:50.142504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.627 [2024-07-15 17:08:50.142534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.627 qpair failed and we were unable to recover it. 
00:26:43.629 [2024-07-15 17:08:50.164278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.629 [2024-07-15 17:08:50.164289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.629 qpair failed and we were unable to recover it. 00:26:43.629 [2024-07-15 17:08:50.164514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.629 [2024-07-15 17:08:50.164524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.629 qpair failed and we were unable to recover it. 00:26:43.629 [2024-07-15 17:08:50.164681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.629 [2024-07-15 17:08:50.164691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.629 qpair failed and we were unable to recover it. 00:26:43.629 [2024-07-15 17:08:50.164852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.629 [2024-07-15 17:08:50.164881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.629 qpair failed and we were unable to recover it. 00:26:43.629 [2024-07-15 17:08:50.165033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.629 [2024-07-15 17:08:50.165062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.629 qpair failed and we were unable to recover it. 
00:26:43.630 [2024-07-15 17:08:50.165328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.165358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.165660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.165689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.165878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.165907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.166054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.166083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.166286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.166316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 
00:26:43.630 [2024-07-15 17:08:50.166494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.166503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.166674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.166703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.166995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.167024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.167265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.167296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.167451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.167480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 
00:26:43.630 [2024-07-15 17:08:50.167688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.167698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.167814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.167824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.167998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.168007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.168177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.168186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.168301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.168311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 
00:26:43.630 [2024-07-15 17:08:50.168466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.168475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.168630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.168639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.168781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.168791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.168960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.168970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.169088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.169098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 
00:26:43.630 [2024-07-15 17:08:50.169267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.169277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.169385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.169395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.169619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.169629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.169816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.169826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.169983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.169993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 
00:26:43.630 [2024-07-15 17:08:50.170147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.170157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.170375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.170406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.170619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.170649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.170791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.170820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.171112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.171141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 
00:26:43.630 [2024-07-15 17:08:50.171346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.171377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.171585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.171615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.171817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.171846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.172061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.172089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 00:26:43.630 [2024-07-15 17:08:50.172377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.630 [2024-07-15 17:08:50.172387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.630 qpair failed and we were unable to recover it. 
00:26:43.630 [2024-07-15 17:08:50.172479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.172491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.172662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.172671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.172848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.172858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.173015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.173025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.173324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.173353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 
00:26:43.631 [2024-07-15 17:08:50.173502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.173531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.173745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.173775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.173981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.174011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.174277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.174307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.174481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.174511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 
00:26:43.631 [2024-07-15 17:08:50.174760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.174789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.175015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.175044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.175284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.175314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.175423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.175433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.175599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.175609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 
00:26:43.631 [2024-07-15 17:08:50.175781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.175791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.176037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.176047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.176271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.176281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.176448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.176458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.176567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.176576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 
00:26:43.631 [2024-07-15 17:08:50.176816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.176826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.176922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.176931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.177044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.177054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.177302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.177312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.177430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.177439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 
00:26:43.631 [2024-07-15 17:08:50.177663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.177673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.177770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.177780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.178033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.178043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.178160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.178170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.178332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.178343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 
00:26:43.631 [2024-07-15 17:08:50.178539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.178548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.178705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.178715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.178827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.178837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.179005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.179014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.179167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.179177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 
00:26:43.631 [2024-07-15 17:08:50.179433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.179463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.179659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.179689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.179888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.179917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.180051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.180080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 00:26:43.631 [2024-07-15 17:08:50.180281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.631 [2024-07-15 17:08:50.180311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.631 qpair failed and we were unable to recover it. 
00:26:43.634 [2024-07-15 17:08:50.204028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.634 [2024-07-15 17:08:50.204038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.634 qpair failed and we were unable to recover it. 00:26:43.634 [2024-07-15 17:08:50.204196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.634 [2024-07-15 17:08:50.204206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.634 qpair failed and we were unable to recover it. 00:26:43.634 [2024-07-15 17:08:50.204458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.634 [2024-07-15 17:08:50.204468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.634 qpair failed and we were unable to recover it. 00:26:43.634 [2024-07-15 17:08:50.204568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.634 [2024-07-15 17:08:50.204578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.634 qpair failed and we were unable to recover it. 00:26:43.634 [2024-07-15 17:08:50.204759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.634 [2024-07-15 17:08:50.204769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.634 qpair failed and we were unable to recover it. 
00:26:43.634 [2024-07-15 17:08:50.204845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.634 [2024-07-15 17:08:50.204854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.634 qpair failed and we were unable to recover it. 00:26:43.634 [2024-07-15 17:08:50.205080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.634 [2024-07-15 17:08:50.205090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.634 qpair failed and we were unable to recover it. 00:26:43.634 [2024-07-15 17:08:50.205230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.634 [2024-07-15 17:08:50.205240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.634 qpair failed and we were unable to recover it. 00:26:43.634 [2024-07-15 17:08:50.205336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.205348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.205523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.205533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 
00:26:43.635 [2024-07-15 17:08:50.205775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.205809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.205940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.205969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.206123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.206153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.206434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.206478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.206600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.206609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 
00:26:43.635 [2024-07-15 17:08:50.206709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.206719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.206962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.206972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.207216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.207230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.207383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.207393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.207624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.207654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 
00:26:43.635 [2024-07-15 17:08:50.207920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.207950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.208242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.208272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.208535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.208565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.208828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.208857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.208995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.209025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 
00:26:43.635 [2024-07-15 17:08:50.209167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.209196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.209499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.209529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.209745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.209774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.210062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.210091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.210303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.210333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 
00:26:43.635 [2024-07-15 17:08:50.210489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.210518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.210730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.210740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.210988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.210998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.211155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.211165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.211274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.211284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 
00:26:43.635 [2024-07-15 17:08:50.211389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.211399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.211646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.211656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.211752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.211761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.211966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.211976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.212054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.212063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 
00:26:43.635 [2024-07-15 17:08:50.212236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.212246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.212427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.212436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.212523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.212533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.212638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.212647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.212820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.212830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 
00:26:43.635 [2024-07-15 17:08:50.212918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.212927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.213035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.213045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.635 [2024-07-15 17:08:50.213151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.635 [2024-07-15 17:08:50.213161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.635 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.213322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.213332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.213499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.213509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 
00:26:43.636 [2024-07-15 17:08:50.213662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.213674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.213926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.213936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.214045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.214054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.214210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.214220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.214330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.214339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 
00:26:43.636 [2024-07-15 17:08:50.214500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.214509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.214689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.214699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.214808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.214818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.214909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.214918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.215034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.215044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 
00:26:43.636 [2024-07-15 17:08:50.215124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.215133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.215301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.215312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.215473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.215482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.215559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.215568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.215726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.215735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 
00:26:43.636 [2024-07-15 17:08:50.215897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.215906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.216087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.216116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.216342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.216372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.216684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.216714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.216914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.216942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 
00:26:43.636 [2024-07-15 17:08:50.217175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.217204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.217489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.217519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.217737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.217766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.218058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.218068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 00:26:43.636 [2024-07-15 17:08:50.218239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.636 [2024-07-15 17:08:50.218249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.636 qpair failed and we were unable to recover it. 
00:26:43.636 [2024-07-15 17:08:50.218469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.218479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.218706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.218735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.219018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.219047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.219259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.219290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.219577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.219606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.219875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.219884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.220061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.220071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.220203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.220213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.220493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.220522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.220764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.220794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.221030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.221059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.221284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.221314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.221537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.221547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.221794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.221804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.221959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.636 [2024-07-15 17:08:50.221969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.636 qpair failed and we were unable to recover it.
00:26:43.636 [2024-07-15 17:08:50.222128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.222151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.222385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.222415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.222622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.222651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.222965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.222975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.223149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.223159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.223281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.223292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.223401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.223411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.223654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.223664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.223851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.223861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.224108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.224137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.224425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.224455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.224659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.224688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.224990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.225000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.225233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.225243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.225374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.225384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.225632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.225642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.225735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.225744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.225965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.225975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.226243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.226253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.226373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.226382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.226449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.226458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.226560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.226570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.226735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.226745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.226899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.226908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.227004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.227017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.227179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.227189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.227365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.227376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.227644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.227713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.227906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.227939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.228168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.228199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.228561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.228640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.228930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.228963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.229190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.229221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.229520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.229551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.229706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.229736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.229993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.230007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.230195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.230209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.230359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.230373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.230583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.230613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.230755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.230784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.230986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.231025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.231329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.231360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.231584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.231615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.231823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.231837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.637 qpair failed and we were unable to recover it.
00:26:43.637 [2024-07-15 17:08:50.232018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.637 [2024-07-15 17:08:50.232032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.232283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.232297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.232462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.232476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.232682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.232711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.232990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.233020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.233248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.233278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.233498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.233528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.233773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.233787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.233954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.233967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.234139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.234152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.234331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.234345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.234503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.234516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.234765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.234794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.235016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.235045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.235253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.235283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.638 [2024-07-15 17:08:50.235437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.638 [2024-07-15 17:08:50.235467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.638 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.235794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.235867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.235995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.236007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.236164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.236174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.236351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.236362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.236465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.236475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.236698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.236707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.236872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.236883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.237136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.237172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.237499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.237528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.237691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.237719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.237878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.237907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.925 [2024-07-15 17:08:50.238119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.925 [2024-07-15 17:08:50.238148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.925 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.238357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.238387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.238613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.238641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.238854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.238864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.239054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.239064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.239236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.239246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.239417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.239427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.239704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.239733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.239887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.239916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.240184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.240213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.240437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.240468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.240732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.240761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.240912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.240921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.241097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.241106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.241354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.241364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.241558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.241568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.241682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.241692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.241904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.241914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.242148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.242158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.242277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.242287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.242535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.242544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.242716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.242725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.242904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.242913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.243036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.243045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.243295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.243321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.243558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.243588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.243870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.243898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.244116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.244145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.244428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.926 [2024-07-15 17:08:50.244458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.926 qpair failed and we were unable to recover it.
00:26:43.926 [2024-07-15 17:08:50.244750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.244779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 00:26:43.926 [2024-07-15 17:08:50.244933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.244961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 00:26:43.926 [2024-07-15 17:08:50.245251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.245281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 00:26:43.926 [2024-07-15 17:08:50.245568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.245597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 00:26:43.926 [2024-07-15 17:08:50.245800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.245810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 
00:26:43.926 [2024-07-15 17:08:50.246032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.246042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 00:26:43.926 [2024-07-15 17:08:50.246133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.246142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 00:26:43.926 [2024-07-15 17:08:50.246319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.246331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 00:26:43.926 [2024-07-15 17:08:50.246517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.246527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 00:26:43.926 [2024-07-15 17:08:50.246692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.246721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 
00:26:43.926 [2024-07-15 17:08:50.246918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.246947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 00:26:43.926 [2024-07-15 17:08:50.247112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.926 [2024-07-15 17:08:50.247141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.926 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.247355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.247385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.247612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.247641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.247762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.247792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 
00:26:43.927 [2024-07-15 17:08:50.247993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.248023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.248316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.248346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.248516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.248525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.248661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.248670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.248860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.248889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 
00:26:43.927 [2024-07-15 17:08:50.249104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.249133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.249315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.249346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.249566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.249595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.249743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.249772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.249991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.250021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 
00:26:43.927 [2024-07-15 17:08:50.250258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.250290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.250555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.250585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.250740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.250770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.250963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.250972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.251131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.251141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 
00:26:43.927 [2024-07-15 17:08:50.251250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.251260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.251396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.251406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.251564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.251574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.251774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.251784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.251963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.251973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 
00:26:43.927 [2024-07-15 17:08:50.252080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.252090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.252264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.252273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.252373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.252384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.252510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.252520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.252707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.252717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 
00:26:43.927 [2024-07-15 17:08:50.252835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.252864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.253008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.253037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.253260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.253290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.253511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.253540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.253685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.253714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 
00:26:43.927 [2024-07-15 17:08:50.254029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.254058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.254342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.254372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.254486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.254498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.254695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.254721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.254990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.255019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 
00:26:43.927 [2024-07-15 17:08:50.255243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.255273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.255427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.927 [2024-07-15 17:08:50.255456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.927 qpair failed and we were unable to recover it. 00:26:43.927 [2024-07-15 17:08:50.255671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.255700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.255914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.255943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.256148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.256177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 
00:26:43.928 [2024-07-15 17:08:50.256482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.256513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.256719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.256748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.256900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.256927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.257105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.257114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.257287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.257297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 
00:26:43.928 [2024-07-15 17:08:50.257471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.257482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.257604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.257614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.257758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.257768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.257880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.257889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.257998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.258007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 
00:26:43.928 [2024-07-15 17:08:50.258182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.258192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.258304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.258314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.258490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.258500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.258700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.258710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.258873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.258883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 
00:26:43.928 [2024-07-15 17:08:50.259108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.259118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.259285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.259295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.259543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.259553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.259712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.259721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 00:26:43.928 [2024-07-15 17:08:50.259921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.928 [2024-07-15 17:08:50.259931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.928 qpair failed and we were unable to recover it. 
00:26:43.928 [2024-07-15 17:08:50.260021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.928 [2024-07-15 17:08:50.260030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.928 qpair failed and we were unable to recover it.
[... the same connect()/sock-connection-error/qpair-failed triplet repeats for tqpair=0x7f4d4c000b90 from 17:08:50.260021 through 17:08:50.276200 ...]
00:26:43.930 [2024-07-15 17:08:50.276399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.930 [2024-07-15 17:08:50.276467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.930 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f4d44000b90 from 17:08:50.276399 through 17:08:50.282024 ...]
00:26:43.931 [2024-07-15 17:08:50.282151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.282165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.282300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.282314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.282435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.282448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.282581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.282594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.282702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.282716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 
00:26:43.931 [2024-07-15 17:08:50.282841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.282854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.282974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.282987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.283177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.283190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.283366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.283380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.283611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.283624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 
00:26:43.931 [2024-07-15 17:08:50.283750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.283763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.283884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.283898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.284145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.284158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.284398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.284412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 00:26:43.931 [2024-07-15 17:08:50.284535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.284549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.931 qpair failed and we were unable to recover it. 
00:26:43.931 [2024-07-15 17:08:50.284727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.931 [2024-07-15 17:08:50.284741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.284934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.284948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.285136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.285152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.285377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.285391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.285581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.285610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 
00:26:43.932 [2024-07-15 17:08:50.285755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.285785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.285999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.286029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.286302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.286334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.286555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.286585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.286831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.286860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 
00:26:43.932 [2024-07-15 17:08:50.287148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.287177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.287349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.287380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.287619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.287649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.287853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.287882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.288030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.288060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 
00:26:43.932 [2024-07-15 17:08:50.288336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.288380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.288562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.288591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.288877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.288890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.289014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.289027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.289208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.289221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 
00:26:43.932 [2024-07-15 17:08:50.289533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.289563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.289752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.289766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.290007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.290020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.290283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.290297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.290525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.290539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 
00:26:43.932 [2024-07-15 17:08:50.290767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.290781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.290970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.290983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.291174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.291188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.291442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.291456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.291586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.291599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 
00:26:43.932 [2024-07-15 17:08:50.291783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.291796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.291924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.291937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.292184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.292198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.292458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.292488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.292649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.292678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 
00:26:43.932 [2024-07-15 17:08:50.292896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.292925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.293216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.293235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.293465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.293479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.293656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.293669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.293798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.293812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 
00:26:43.932 [2024-07-15 17:08:50.293991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.932 [2024-07-15 17:08:50.294004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.932 qpair failed and we were unable to recover it. 00:26:43.932 [2024-07-15 17:08:50.294287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.294319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.294626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.294661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.294819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.294848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.295042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.295071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 
00:26:43.933 [2024-07-15 17:08:50.295309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.295340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.295527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.295556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.295823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.295852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.296005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.296035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.296329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.296360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 
00:26:43.933 [2024-07-15 17:08:50.296515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.296547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.296757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.296770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.297000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.297013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.297289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.297304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.297435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.297449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 
00:26:43.933 [2024-07-15 17:08:50.297641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.297655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.297773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.297786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.298034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.298063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.298280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.298311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.298453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.298483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 
00:26:43.933 [2024-07-15 17:08:50.298686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.298716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.298925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.298956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.299157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.299188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.299374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.299404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.299639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.299668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 
00:26:43.933 [2024-07-15 17:08:50.299829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.933 [2024-07-15 17:08:50.299858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:43.933 qpair failed and we were unable to recover it.
00:26:43.933 [... three more identical failures for tqpair=0x7f4d44000b90 through 17:08:50.300539 ...]
00:26:43.933 [2024-07-15 17:08:50.300733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0000 is same with the state(5) to be set
00:26:43.933 [2024-07-15 17:08:50.300951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.933 [2024-07-15 17:08:50.300977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.933 qpair failed and we were unable to recover it.
00:26:43.933 [... the same failure pattern repeated for tqpair=0x7f4d4c000b90, addr=10.0.0.2, port=4420 through 17:08:50.303144 ...]
00:26:43.933 [2024-07-15 17:08:50.303373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.933 [2024-07-15 17:08:50.303403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.933 qpair failed and we were unable to recover it. 00:26:43.933 [2024-07-15 17:08:50.303717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.303747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.303966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.303977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.304239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.304249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.304423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.304434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 
00:26:43.934 [2024-07-15 17:08:50.304623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.304653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.304883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.304912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.305038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.305068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.305312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.305343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.305551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.305580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 
00:26:43.934 [2024-07-15 17:08:50.305818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.305827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.306001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.306030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.306184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.306214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.306382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.306411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.306624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.306654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 
00:26:43.934 [2024-07-15 17:08:50.306924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.306959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.307185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.307215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.307454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.307484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.307632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.307642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.307794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.307803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 
00:26:43.934 [2024-07-15 17:08:50.307967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.307977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.308071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.308084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.308243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.308254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.308362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.308372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.308497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.308506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 
00:26:43.934 [2024-07-15 17:08:50.308625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.308635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.308794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.308804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.308928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.308938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.309048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.309058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.309295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.309305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 
00:26:43.934 [2024-07-15 17:08:50.309426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.309436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.309552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.309562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.309719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.309729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.309812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.309821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.309921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.309930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 
00:26:43.934 [2024-07-15 17:08:50.310123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.934 [2024-07-15 17:08:50.310133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.934 qpair failed and we were unable to recover it. 00:26:43.934 [2024-07-15 17:08:50.310322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.310353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.310515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.310545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.310693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.310722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.310870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.310899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 
00:26:43.935 [2024-07-15 17:08:50.311057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.311086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.311303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.311333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.311611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.311641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.311853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.311882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.312018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.312048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 
00:26:43.935 [2024-07-15 17:08:50.312256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.312286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.312558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.312587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.312794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.312835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.312989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.312999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.313196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.313238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 
00:26:43.935 [2024-07-15 17:08:50.313410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.313439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.313734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.313764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.313958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.313968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.314128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.314138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.314236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.314246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 
00:26:43.935 [2024-07-15 17:08:50.314491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.314505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.314732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.314742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.314834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.314843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.314955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.314965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.315107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.315117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 
00:26:43.935 [2024-07-15 17:08:50.315356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.315366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.315465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.315474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.315629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.315639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.315718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.315727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.315826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.315835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 
00:26:43.935 [2024-07-15 17:08:50.315945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.315955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.316047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.316057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.316228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.316238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.316461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.316471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.316630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.316640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 
00:26:43.935 [2024-07-15 17:08:50.316733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.316741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.316941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.316951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.317117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.317127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.317358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.317388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.935 [2024-07-15 17:08:50.317541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.317571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 
00:26:43.935 [2024-07-15 17:08:50.317725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.935 [2024-07-15 17:08:50.317753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.935 qpair failed and we were unable to recover it. 00:26:43.936 [2024-07-15 17:08:50.318012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.936 [2024-07-15 17:08:50.318022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.936 qpair failed and we were unable to recover it. 00:26:43.936 [2024-07-15 17:08:50.318158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.936 [2024-07-15 17:08:50.318168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.936 qpair failed and we were unable to recover it. 00:26:43.936 [2024-07-15 17:08:50.318418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.936 [2024-07-15 17:08:50.318448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.936 qpair failed and we were unable to recover it. 00:26:43.936 [2024-07-15 17:08:50.318643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.936 [2024-07-15 17:08:50.318673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.936 qpair failed and we were unable to recover it. 
00:26:43.936 [... the same two-line error repeats continuously from 17:08:50.318977 through 17:08:50.337807: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:43.938 [2024-07-15 17:08:50.337913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.938 [2024-07-15 17:08:50.337923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.938 qpair failed and we were unable to recover it. 00:26:43.938 [2024-07-15 17:08:50.338018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.938 [2024-07-15 17:08:50.338028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.938 qpair failed and we were unable to recover it. 00:26:43.938 [2024-07-15 17:08:50.338112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.338124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.338286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.338295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.338405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.338414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 
00:26:43.939 [2024-07-15 17:08:50.338511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.338520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.338624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.338634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.338735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.338745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.338989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.339000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.339101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.339112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 
00:26:43.939 [2024-07-15 17:08:50.339218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.339232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.339361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.339370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.339467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.339476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.339578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.339589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.339697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.339707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 
00:26:43.939 [2024-07-15 17:08:50.339798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.339808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.339918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.339927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.340039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.340049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.340166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.340177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.340266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.340276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 
00:26:43.939 [2024-07-15 17:08:50.340451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.340461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.340545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.340553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.340653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.340662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.340761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.340771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.340897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.340907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 
00:26:43.939 [2024-07-15 17:08:50.341017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.341030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.341131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.341141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.341233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.341243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.341420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.341430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.341588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.341600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 
00:26:43.939 [2024-07-15 17:08:50.341705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.341715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.341890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.341900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.342059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.342069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.342156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.342166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.342323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.342333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 
00:26:43.939 [2024-07-15 17:08:50.342436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.342446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.342615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.342627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.342788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.342798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.342903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.342914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.343087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.343117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 
00:26:43.939 [2024-07-15 17:08:50.343268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.343298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.343512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.343541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.343735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.939 [2024-07-15 17:08:50.343745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.939 qpair failed and we were unable to recover it. 00:26:43.939 [2024-07-15 17:08:50.343839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.343850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.344076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.344086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 
00:26:43.940 [2024-07-15 17:08:50.344195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.344205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.344376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.344387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.344489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.344499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.344613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.344623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.344790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.344800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 
00:26:43.940 [2024-07-15 17:08:50.344955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.344965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.345077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.345086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.345190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.345200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.345301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.345313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.345419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.345428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 
00:26:43.940 [2024-07-15 17:08:50.345548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.345558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.345659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.345671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.345770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.345780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.345879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.345889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.346033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.346043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 
00:26:43.940 [2024-07-15 17:08:50.346133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.346142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.346235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.346245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.346426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.346436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.346661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.346672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.346900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.346910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 
00:26:43.940 [2024-07-15 17:08:50.347046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.347059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.347166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.347176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.347279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.347290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.347374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.347383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.347491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.347502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 
00:26:43.940 [2024-07-15 17:08:50.347594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.347603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.347697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.347707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.347824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.347834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.347928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.347938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 00:26:43.940 [2024-07-15 17:08:50.348033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.348043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 
00:26:43.940 [2024-07-15 17:08:50.348287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.940 [2024-07-15 17:08:50.348297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.940 qpair failed and we were unable to recover it. 
[... identical connect() failure (posix.c:1038, errno = 111 / ECONNREFUSED) and nvme_tcp_qpair_connect_sock error for tqpair=0x7f4d4c000b90 (addr=10.0.0.2, port=4420) repeated continuously from 17:08:50.348465 through 17:08:50.365591; every attempt ended with "qpair failed and we were unable to recover it." ...]
00:26:43.943 [2024-07-15 17:08:50.365739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.943 [2024-07-15 17:08:50.365749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.943 qpair failed and we were unable to recover it. 00:26:43.943 [2024-07-15 17:08:50.365861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.943 [2024-07-15 17:08:50.365871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.943 qpair failed and we were unable to recover it. 00:26:43.943 [2024-07-15 17:08:50.365959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.943 [2024-07-15 17:08:50.365969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.943 qpair failed and we were unable to recover it. 00:26:43.943 [2024-07-15 17:08:50.366180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.943 [2024-07-15 17:08:50.366190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.943 qpair failed and we were unable to recover it. 00:26:43.943 [2024-07-15 17:08:50.366354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.943 [2024-07-15 17:08:50.366365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.943 qpair failed and we were unable to recover it. 
00:26:43.943 [2024-07-15 17:08:50.366475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.943 [2024-07-15 17:08:50.366485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.943 qpair failed and we were unable to recover it. 00:26:43.943 [2024-07-15 17:08:50.366652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.943 [2024-07-15 17:08:50.366661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.943 qpair failed and we were unable to recover it. 00:26:43.943 [2024-07-15 17:08:50.366857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.943 [2024-07-15 17:08:50.366867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.943 qpair failed and we were unable to recover it. 00:26:43.943 [2024-07-15 17:08:50.367105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.943 [2024-07-15 17:08:50.367115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.943 qpair failed and we were unable to recover it. 00:26:43.943 [2024-07-15 17:08:50.367287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.367298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 
00:26:43.944 [2024-07-15 17:08:50.367418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.367454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.367692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.367722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.367861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.367890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.368023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.368064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.368234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.368245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 
00:26:43.944 [2024-07-15 17:08:50.368412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.368441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.368577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.368606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.368719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.368748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.368902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.368942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.369042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.369051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 
00:26:43.944 [2024-07-15 17:08:50.369145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.369154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.369255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.369265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.369362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.369371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.369537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.369547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.369708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.369718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 
00:26:43.944 [2024-07-15 17:08:50.369875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.369904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.370114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.370144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.370303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.370334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.370557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.370586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.370785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.370815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 
00:26:43.944 [2024-07-15 17:08:50.370926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.370955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.371156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.371186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.371335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.371367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.371573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.371602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.371832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.371861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 
00:26:43.944 [2024-07-15 17:08:50.372107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.372142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.372290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.372322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.372460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.372490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.372644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.372674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.372806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.372835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 
00:26:43.944 [2024-07-15 17:08:50.372965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.372975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.373132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.373143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.373242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.373253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.373345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.373355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.373460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.373470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 
00:26:43.944 [2024-07-15 17:08:50.373626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.373636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.373737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.373747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.373901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.373910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.374068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.374078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 00:26:43.944 [2024-07-15 17:08:50.374297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.944 [2024-07-15 17:08:50.374328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.944 qpair failed and we were unable to recover it. 
00:26:43.945 [2024-07-15 17:08:50.374529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.374559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.374777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.374807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.375004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.375033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.375161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.375203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.375349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.375360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 
00:26:43.945 [2024-07-15 17:08:50.375521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.375531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.375638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.375649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.375816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.375826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.375993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.376003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.376170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.376179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 
00:26:43.945 [2024-07-15 17:08:50.376294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.376305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.376394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.376403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.376500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.376510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.376677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.376688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.376866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.376876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 
00:26:43.945 [2024-07-15 17:08:50.376975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.376987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.377112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.377122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.377232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.377243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.377311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.377320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.377414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.377423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 
00:26:43.945 [2024-07-15 17:08:50.377581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.377591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.377667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.377676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.377855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.377865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.377974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.377984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 00:26:43.945 [2024-07-15 17:08:50.378097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.945 [2024-07-15 17:08:50.378106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.945 qpair failed and we were unable to recover it. 
00:26:43.945 [2024-07-15 17:08:50.378201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.378211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.378322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.378333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.378512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.378522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.378627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.378637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.378736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.378747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.378909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.378919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.379077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.379087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.379188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.379197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.379295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.379305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.379395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.379405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.379497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.379507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.379607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.379617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.379725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.379735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.379906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.379935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.945 [2024-07-15 17:08:50.380146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.945 [2024-07-15 17:08:50.380176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.945 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.380397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.380428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.380587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.380617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.380780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.380809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.381032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.381043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.381149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.381159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.381371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.381381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.381546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.381556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.381710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.381748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.381880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.381910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.382043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.382073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.382291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.382323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.382463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.382493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.382630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.382665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.382808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.382837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.382985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.383015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.383215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.383239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.383496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.383506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.383597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.383606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.383700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.383710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.383799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.383808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.383916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.383926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.384023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.384034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.384125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.384135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.384243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.384253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.384431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.384441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.384552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.384561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.384653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.384663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.384812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.384822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.384995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.385005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.385128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.385138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.385253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.385264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.385351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.385360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.385583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.385593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.385692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.385702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.385807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.385817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.385988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.386000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.386183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.386192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.386367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.386378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.386552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.386562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.386649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.386658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.946 qpair failed and we were unable to recover it.
00:26:43.946 [2024-07-15 17:08:50.386761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.946 [2024-07-15 17:08:50.386770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.386862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.386872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.386980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.386990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.387083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.387092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.387262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.387273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.387458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.387468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.387559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.387568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.387730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.387740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.387832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.387841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.388009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.388019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.388182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.388192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.388308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.388319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.388422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.388433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.388592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.388602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.388695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.388704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.388828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.388838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.388930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.388940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.389112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.389122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.389209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.389218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.389317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.389326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.389486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.389496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.389671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.389681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.389784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.389794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.389885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.389894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.389995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.390005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.390160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.390170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.390265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.390275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.390452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.390463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.390567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.390577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.390670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.390680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.390889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.390898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.391053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.391063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.391242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.391252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.391367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.391377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.947 [2024-07-15 17:08:50.391542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.947 [2024-07-15 17:08:50.391552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.947 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.391647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.391656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.391818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.391828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.391922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.391935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.392027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.392036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.392138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.392149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.392312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.392328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.392440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.392450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.392616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.392626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.392714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.392723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.392875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.392885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.392977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.392987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.393145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.393155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.393246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.393256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.393409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.393419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.393520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.393530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.393689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.393699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.393866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.393876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.394045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.394057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.394163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.394173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.394349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.394359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.394525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.394535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.394627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.394637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.394795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.394804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.394967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.394976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.395082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.948 [2024-07-15 17:08:50.395093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.948 qpair failed and we were unable to recover it.
00:26:43.948 [2024-07-15 17:08:50.395294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.395304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.395397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.395407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.395496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.395505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.395615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.395625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.395715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.395724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 
00:26:43.948 [2024-07-15 17:08:50.395963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.395973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.396129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.396140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.396260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.396271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.396467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.396476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.396596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.396606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 
00:26:43.948 [2024-07-15 17:08:50.396711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.396721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.396886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.396896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.396997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.397006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.397102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.397112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 00:26:43.948 [2024-07-15 17:08:50.397219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.948 [2024-07-15 17:08:50.397234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.948 qpair failed and we were unable to recover it. 
00:26:43.948 [2024-07-15 17:08:50.397342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.397353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.397520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.397530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.397697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.397707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.397816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.397826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.397923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.397933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 
00:26:43.949 [2024-07-15 17:08:50.398088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.398098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.398269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.398280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.398372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.398381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.398472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.398482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.398577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.398587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 
00:26:43.949 [2024-07-15 17:08:50.398678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.398687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.398779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.398789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.399031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.399041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.399196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.399206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.399310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.399320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 
00:26:43.949 [2024-07-15 17:08:50.399414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.399424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.399601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.399611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.399710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.399722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.399843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.399853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.400020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.400030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 
00:26:43.949 [2024-07-15 17:08:50.400177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.400187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.400344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.400354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.400493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.400503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.400616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.400626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.400810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.400819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 
00:26:43.949 [2024-07-15 17:08:50.400939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.400949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.401072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.401082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.401251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.401261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.401415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.401425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.401527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.401537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 
00:26:43.949 [2024-07-15 17:08:50.401631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.401641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.401762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.401772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.401860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.401869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.402036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.402046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.402214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.402227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 
00:26:43.949 [2024-07-15 17:08:50.402406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.402416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.402577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.402587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.402763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.402773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.402948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.402958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.949 [2024-07-15 17:08:50.403112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.403122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 
00:26:43.949 [2024-07-15 17:08:50.403245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.949 [2024-07-15 17:08:50.403255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.949 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.403349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.403359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.403464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.403474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.403600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.403609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.403780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.403790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 
00:26:43.950 [2024-07-15 17:08:50.403881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.403891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.403981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.403990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.404147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.404157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.404324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.404335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.404437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.404448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 
00:26:43.950 [2024-07-15 17:08:50.404549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.404560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.404753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.404763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.404988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.404998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.405166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.405177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.405404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.405415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 
00:26:43.950 [2024-07-15 17:08:50.405659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.405669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.405845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.405855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.405955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.405967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.406144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.406154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 00:26:43.950 [2024-07-15 17:08:50.406261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.950 [2024-07-15 17:08:50.406272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.950 qpair failed and we were unable to recover it. 
00:26:43.950 [2024-07-15 17:08:50.406367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.950 [2024-07-15 17:08:50.406377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.950 qpair failed and we were unable to recover it.
[... the same triplet — posix.c:1038:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 2024-07-15 17:08:50.406584 through 17:08:50.425719 ...]
00:26:43.953 [2024-07-15 17:08:50.425884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.953 [2024-07-15 17:08:50.425894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.953 qpair failed and we were unable to recover it.
00:26:43.953 [2024-07-15 17:08:50.426129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.426140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.426312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.426322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.426485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.426498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.426611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.426621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.426771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.426781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 
00:26:43.953 [2024-07-15 17:08:50.426875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.426884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.426999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.427010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.427174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.427185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.427306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.427316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.427484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.427494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 
00:26:43.953 [2024-07-15 17:08:50.427719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.427729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.427842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.427852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.428041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.428052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.428220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.428236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.428403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.428413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 
00:26:43.953 [2024-07-15 17:08:50.428573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.428583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.428675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.428685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.428856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.428867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.429033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.429043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.429211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.429221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 
00:26:43.953 [2024-07-15 17:08:50.429383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.429393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.429502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.429512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.953 qpair failed and we were unable to recover it. 00:26:43.953 [2024-07-15 17:08:50.429609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.953 [2024-07-15 17:08:50.429619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.429729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.429739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.429936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.429946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 
00:26:43.954 [2024-07-15 17:08:50.430111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.430122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.430285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.430295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.430413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.430423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.430589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.430599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.430895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.430929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 
00:26:43.954 [2024-07-15 17:08:50.431195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.431210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.431421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.431436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.431674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.431688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.431807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.431821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.432102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.432116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 
00:26:43.954 [2024-07-15 17:08:50.432405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.432420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.432548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.432561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.432727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.432740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.433004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.433018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.433272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.433287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 
00:26:43.954 [2024-07-15 17:08:50.433439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.433453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.433625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.433639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.433913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.433926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.434180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.434193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.434426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.434439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 
00:26:43.954 [2024-07-15 17:08:50.434619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.434632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.434893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.434907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.435079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.435092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.435266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.435281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.435430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.435443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 
00:26:43.954 [2024-07-15 17:08:50.435605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.435618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.435802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.435815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.435994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.436008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.436217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.436236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.436473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.436487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 
00:26:43.954 [2024-07-15 17:08:50.436655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.436668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.436786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.436802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.436967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.436981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.437254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.437269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.437394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.437407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 
00:26:43.954 [2024-07-15 17:08:50.437520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.437534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.437787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.437800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.437901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.437914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.954 [2024-07-15 17:08:50.438087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.954 [2024-07-15 17:08:50.438101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.954 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.438281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.438295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 
00:26:43.955 [2024-07-15 17:08:50.438550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.438564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.438747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.438761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.438941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.438954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.439084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.439098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.439372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.439386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 
00:26:43.955 [2024-07-15 17:08:50.439619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.439632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.439759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.439772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.439944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.439957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.440217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.440235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.440491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.440505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 
00:26:43.955 [2024-07-15 17:08:50.440677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.440691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.440922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.440936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.441063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.441077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.441265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.441275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 00:26:43.955 [2024-07-15 17:08:50.441441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.955 [2024-07-15 17:08:50.441452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.955 qpair failed and we were unable to recover it. 
00:26:43.958 [2024-07-15 17:08:50.463050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.463061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.463296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.463306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.463499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.463509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.463636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.463646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.463746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.463756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 
00:26:43.958 [2024-07-15 17:08:50.463852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.463862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.464107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.464117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.464345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.464356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.464485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.464495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.464606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.464616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 
00:26:43.958 [2024-07-15 17:08:50.464807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.464817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.465095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.465105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.465352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.465363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.465538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.465549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.465755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.465765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 
00:26:43.958 [2024-07-15 17:08:50.466040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.466050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.466254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.466264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.466517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.466528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.466685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.466695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.466838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.466848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 
00:26:43.958 [2024-07-15 17:08:50.467025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.467035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.467202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.467212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.467421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.467431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.467604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.467614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.467739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.467749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 
00:26:43.958 [2024-07-15 17:08:50.467867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.467877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.468066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.468075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.468324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.468336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.468445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.468455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 00:26:43.958 [2024-07-15 17:08:50.468675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.958 [2024-07-15 17:08:50.468685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.958 qpair failed and we were unable to recover it. 
00:26:43.958 [2024-07-15 17:08:50.468787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.468797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.468996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.469005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.469213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.469228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.469339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.469349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.469544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.469554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 
00:26:43.959 [2024-07-15 17:08:50.469715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.469725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.469936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.469946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.470167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.470178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.470416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.470427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.470559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.470569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 
00:26:43.959 [2024-07-15 17:08:50.470693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.470705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.470949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.470960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.471181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.471191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.471416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.471427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.471600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.471610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 
00:26:43.959 [2024-07-15 17:08:50.471737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.471746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.471914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.471924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.472113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.472123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.472302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.472312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.472439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.472449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 
00:26:43.959 [2024-07-15 17:08:50.472558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.472568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.472731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.472740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.472948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.472957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.473049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.473058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.473304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.473314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 
00:26:43.959 [2024-07-15 17:08:50.473484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.473494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.473671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.473681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.473938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.473948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.474170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.474181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.474350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.474360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 
00:26:43.959 [2024-07-15 17:08:50.474473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.474483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.474659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.474669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.474909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.474919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.475187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.475197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.475367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.475377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 
00:26:43.959 [2024-07-15 17:08:50.475584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.475594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.475776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.475786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.476042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.476052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.476173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.476183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.476399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.476409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 
00:26:43.959 [2024-07-15 17:08:50.476575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.959 [2024-07-15 17:08:50.476585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.959 qpair failed and we were unable to recover it. 00:26:43.959 [2024-07-15 17:08:50.476721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.476731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 00:26:43.960 [2024-07-15 17:08:50.476909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.476919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 00:26:43.960 [2024-07-15 17:08:50.477138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.477148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 00:26:43.960 [2024-07-15 17:08:50.477397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.477409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 
00:26:43.960 [2024-07-15 17:08:50.477525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.477535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 00:26:43.960 [2024-07-15 17:08:50.477656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.477666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 00:26:43.960 [2024-07-15 17:08:50.477942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.477952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 00:26:43.960 [2024-07-15 17:08:50.478120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.478130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 00:26:43.960 [2024-07-15 17:08:50.478378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.478388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 
00:26:43.960 [2024-07-15 17:08:50.478567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.478579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 00:26:43.960 [2024-07-15 17:08:50.478750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.478760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 00:26:43.960 [2024-07-15 17:08:50.478940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.478950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 00:26:43.960 [2024-07-15 17:08:50.479192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.479202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 00:26:43.960 [2024-07-15 17:08:50.479382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.960 [2024-07-15 17:08:50.479393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.960 qpair failed and we were unable to recover it. 
00:26:43.960 [2024-07-15 17:08:50.479622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.479632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.479799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.479809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.480010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.480020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.480130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.480140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.480306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.480317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.480427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.480437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.480561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.480571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.480743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.480753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.480943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.480953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.481123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.481134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.481305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.481316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.481440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.481450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.481559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.481569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.481794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.481804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.481988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.481998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.482264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.482274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.482498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.482507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.482679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.482689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.482850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.482860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.483073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.483083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.960 [2024-07-15 17:08:50.483186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.960 [2024-07-15 17:08:50.483195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.960 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.483368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.483378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.483604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.483614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.483816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.483826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.484002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.484012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.484263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.484273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.484498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.484508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.484773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.484783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.484910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.484920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.485078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.485088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.485272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.485283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.485458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.485468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.485644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.485654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.485829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.485839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.486008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.486018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.486202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.486214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.486384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.486395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.486508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.486518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.486647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.486658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.486828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.486839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.486973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.486984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.487236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.487246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.487372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.487382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.487510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.487520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.487749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.487759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.487882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.487892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.488115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.488125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.488364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.488395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.488562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.488591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.488752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.488782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.489096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.489133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.489299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.489309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.489529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.489539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.489776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.489806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.490076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.490104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.490424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.490455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.490675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.490705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.490854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.490884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.491104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.491114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.491277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.491287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.961 [2024-07-15 17:08:50.491376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.961 [2024-07-15 17:08:50.491385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.961 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.491504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.491514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.491652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.491685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.491884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.491900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.492189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.492203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.492393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.492409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.492636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.492650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.492785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.492799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.492981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.492995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.493175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.493189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.493425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.493440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.493556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.493568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.493675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.493685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.493863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.493873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.493979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.493989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.494157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.494169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.494348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.494359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.494529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.494539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.494715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.494725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.495021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.495050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.495277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.495307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.495549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.495579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.495791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.495821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.496045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.496074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.496296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.496307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.496504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.496514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.496675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.496684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.496966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.496995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.497211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.497252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.497477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.497507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.497665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.497694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.498015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.498053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.498166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.498176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.498350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.498360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.498535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.962 [2024-07-15 17:08:50.498545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:43.962 qpair failed and we were unable to recover it.
00:26:43.962 [2024-07-15 17:08:50.498709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.962 [2024-07-15 17:08:50.498719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.962 qpair failed and we were unable to recover it. 00:26:43.962 [2024-07-15 17:08:50.498940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.962 [2024-07-15 17:08:50.498950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.962 qpair failed and we were unable to recover it. 00:26:43.962 [2024-07-15 17:08:50.499047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.962 [2024-07-15 17:08:50.499058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.962 qpair failed and we were unable to recover it. 00:26:43.962 [2024-07-15 17:08:50.499289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.962 [2024-07-15 17:08:50.499300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.962 qpair failed and we were unable to recover it. 00:26:43.962 [2024-07-15 17:08:50.499547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.962 [2024-07-15 17:08:50.499557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.962 qpair failed and we were unable to recover it. 
00:26:43.962 [2024-07-15 17:08:50.499735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.962 [2024-07-15 17:08:50.499745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.962 qpair failed and we were unable to recover it. 00:26:43.962 [2024-07-15 17:08:50.499917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.962 [2024-07-15 17:08:50.499925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.962 qpair failed and we were unable to recover it. 00:26:43.962 [2024-07-15 17:08:50.500051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.962 [2024-07-15 17:08:50.500060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.962 qpair failed and we were unable to recover it. 00:26:43.962 [2024-07-15 17:08:50.500158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.962 [2024-07-15 17:08:50.500167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.962 qpair failed and we were unable to recover it. 00:26:43.962 [2024-07-15 17:08:50.500386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.962 [2024-07-15 17:08:50.500395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.962 qpair failed and we were unable to recover it. 
00:26:43.963 [2024-07-15 17:08:50.500516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.500525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.500674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.500682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.500850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.500859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.501137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.501145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.501378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.501387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 
00:26:43.963 [2024-07-15 17:08:50.501502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.501512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.501669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.501678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.501852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.501861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.501978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.501987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.502212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.502221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 
00:26:43.963 [2024-07-15 17:08:50.502353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.502364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.502519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.502527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.502653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.502663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.502950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.502959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.503125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.503134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 
00:26:43.963 [2024-07-15 17:08:50.503313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.503322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.503463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.503472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.503598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.503607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.503766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.503774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.504088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.504097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 
00:26:43.963 [2024-07-15 17:08:50.504405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.504414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.504544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.504553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.504713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.504723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.504916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.504925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.505175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.505185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 
00:26:43.963 [2024-07-15 17:08:50.505414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.505424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.505661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.505670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.505787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.505797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.505958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.505967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.506199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.506209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 
00:26:43.963 [2024-07-15 17:08:50.506350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.506361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.506483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.506492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.506626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.506636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.506811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.506822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.507002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.507012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 
00:26:43.963 [2024-07-15 17:08:50.507122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.507132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.507368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.507379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.507555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.507565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.507732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.507742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.508019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.508029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 
00:26:43.963 [2024-07-15 17:08:50.508201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.508210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.508386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.508397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.508577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.508587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.508713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.963 [2024-07-15 17:08:50.508723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.963 qpair failed and we were unable to recover it. 00:26:43.963 [2024-07-15 17:08:50.508840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.508850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 
00:26:43.964 [2024-07-15 17:08:50.509073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.509083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.509254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.509265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.509392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.509402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.509503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.509513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.509615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.509625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 
00:26:43.964 [2024-07-15 17:08:50.509804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.509816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.510087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.510096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.510321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.510331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.510444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.510454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.510576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.510587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 
00:26:43.964 [2024-07-15 17:08:50.510702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.510712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.510846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.510856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.511094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.511105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.511264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.511275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.511451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.511461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 
00:26:43.964 [2024-07-15 17:08:50.511636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.511646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.511807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.511817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.511941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.511951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.512205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.512215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.512379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.512390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 
00:26:43.964 [2024-07-15 17:08:50.512578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.512588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.512767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.512777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.513017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.513026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.513276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.513286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.513464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.513474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 
00:26:43.964 [2024-07-15 17:08:50.513618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.513647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.513862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.513891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.514110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.514139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.514415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.514425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.514589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.514599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 
00:26:43.964 [2024-07-15 17:08:50.514899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.514929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.515149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.515179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.515473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.515504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.515688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.515703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 00:26:43.964 [2024-07-15 17:08:50.515919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.964 [2024-07-15 17:08:50.515949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:43.964 qpair failed and we were unable to recover it. 
00:26:43.968 [2024-07-15 17:08:50.541557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.541587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.541828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.541857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.542097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.542126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.542412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.542422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.542545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.542555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 
00:26:43.968 [2024-07-15 17:08:50.542746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.542756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.542966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.542976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.543221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.543240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.543434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.543444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.543601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.543611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 
00:26:43.968 [2024-07-15 17:08:50.543805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.543826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.543989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.544018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.544336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.544366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.544581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.544591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.544725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.544736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 
00:26:43.968 [2024-07-15 17:08:50.544964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.544994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.545210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.545248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.545499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.545528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.545742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.545772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.546051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.546081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 
00:26:43.968 [2024-07-15 17:08:50.546336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.546367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.546704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.546734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.546978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.547007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.547179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.547208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.547476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.547486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 
00:26:43.968 [2024-07-15 17:08:50.547603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.547613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.547854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.547864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.548040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.548050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.548284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.548295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.548559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.548569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 
00:26:43.968 [2024-07-15 17:08:50.548763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.548774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.549025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.549035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.968 [2024-07-15 17:08:50.549261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.968 [2024-07-15 17:08:50.549272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.968 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.549440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.549450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.549629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.549659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 
00:26:43.969 [2024-07-15 17:08:50.549873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.549903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.550142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.550172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.550484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.550494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.550735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.550745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.550875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.550885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 
00:26:43.969 [2024-07-15 17:08:50.551143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.551153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.551406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.551438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.551704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.551733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.551977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.552007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.552210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.552249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 
00:26:43.969 [2024-07-15 17:08:50.552543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.552572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.552810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.552839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.553123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.553157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.553348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.553358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.553522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.553532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 
00:26:43.969 [2024-07-15 17:08:50.553689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.553699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.553946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.553975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.554199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.554251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.554477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.554486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.554586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.554597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 
00:26:43.969 [2024-07-15 17:08:50.554702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.554712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.554874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.554884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.555126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.969 [2024-07-15 17:08:50.555135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.969 qpair failed and we were unable to recover it. 00:26:43.969 [2024-07-15 17:08:50.555241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.555252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.555464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.555475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 
00:26:43.970 [2024-07-15 17:08:50.555579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.555589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.555875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.555904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.556132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.556161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.556421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.556455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.556578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.556588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 
00:26:43.970 [2024-07-15 17:08:50.556787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.556822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.557022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.557051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.557282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.557292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.557554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.557564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.557732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.557742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 
00:26:43.970 [2024-07-15 17:08:50.557930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.557959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.558174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.558202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.558491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.558521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.558744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.558774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.559061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.559091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 
00:26:43.970 [2024-07-15 17:08:50.559371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.559382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.559559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.559569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.559749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.559779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.559998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.560027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.560162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.560191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 
00:26:43.970 [2024-07-15 17:08:50.560432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.560442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.560688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.560698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.560870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.560880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.561059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.561068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.561344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.561355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 
00:26:43.970 [2024-07-15 17:08:50.561546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.561556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.561826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.561854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.562056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.562091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.562420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.562431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.562626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.562636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 
00:26:43.970 [2024-07-15 17:08:50.562860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.562870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.563040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.563051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.563244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.563255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.563432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.563442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.563553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.563563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 
00:26:43.970 [2024-07-15 17:08:50.563687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.563697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.563825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.563835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.564097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.564106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.970 [2024-07-15 17:08:50.564287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.970 [2024-07-15 17:08:50.564297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.970 qpair failed and we were unable to recover it. 00:26:43.971 [2024-07-15 17:08:50.564521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.971 [2024-07-15 17:08:50.564531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.971 qpair failed and we were unable to recover it. 
00:26:43.971 [2024-07-15 17:08:50.564767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.971 [2024-07-15 17:08:50.564778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.971 qpair failed and we were unable to recover it. 00:26:43.971 [2024-07-15 17:08:50.564909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.971 [2024-07-15 17:08:50.564919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.971 qpair failed and we were unable to recover it. 00:26:43.971 [2024-07-15 17:08:50.565188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.971 [2024-07-15 17:08:50.565217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.971 qpair failed and we were unable to recover it. 00:26:43.971 [2024-07-15 17:08:50.565518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.971 [2024-07-15 17:08:50.565547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.971 qpair failed and we were unable to recover it. 00:26:43.971 [2024-07-15 17:08:50.565704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.971 [2024-07-15 17:08:50.565733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.971 qpair failed and we were unable to recover it. 
00:26:43.971 [2024-07-15 17:08:50.565977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.971 [2024-07-15 17:08:50.566006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:43.971 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.566233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.566264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.566471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.566502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.566726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.566756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.566946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.566975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 
00:26:44.252 [2024-07-15 17:08:50.567252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.567283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.567483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.567493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.567715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.567725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.567893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.567903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.568111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.568140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 
00:26:44.252 [2024-07-15 17:08:50.568365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.568396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.568626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.568655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.568997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.569026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.569252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.569283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.569499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.569528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 
00:26:44.252 [2024-07-15 17:08:50.569750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.569780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.570050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.252 [2024-07-15 17:08:50.570079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.252 qpair failed and we were unable to recover it. 00:26:44.252 [2024-07-15 17:08:50.570278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.570289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.570464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.570474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.570584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.570594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 
00:26:44.253 [2024-07-15 17:08:50.570787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.570796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.570891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.570900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.571081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.571092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.571248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.571259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.571479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.571507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 
00:26:44.253 [2024-07-15 17:08:50.571732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.571761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.572045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.572074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.572309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.572341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.572559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.572588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.572850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.572860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 
00:26:44.253 [2024-07-15 17:08:50.573132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.573142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.573331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.573341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.573517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.573527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.573787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.573816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.574070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.574099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 
00:26:44.253 [2024-07-15 17:08:50.574344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.574375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.574530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.574559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.574781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.574810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.575105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.575134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.575358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.575389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 
00:26:44.253 [2024-07-15 17:08:50.575679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.575708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.576025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.576054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.576190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.576219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.576407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.576417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.576516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.576525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 
00:26:44.253 [2024-07-15 17:08:50.576717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.576726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.576990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.576999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.577285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.577295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.577490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.577500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.577750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.577761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 
00:26:44.253 [2024-07-15 17:08:50.578133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.578143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.578405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.578436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.578651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.578680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.579025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.579054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 00:26:44.253 [2024-07-15 17:08:50.579334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.253 [2024-07-15 17:08:50.579364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.253 qpair failed and we were unable to recover it. 
00:26:44.253 [2024-07-15 17:08:50.579523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.579553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.579763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.579773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.579980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.579990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.580199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.580208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.580405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.580415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 
00:26:44.254 [2024-07-15 17:08:50.580688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.580716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.580952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.580981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.581255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.581296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.581458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.581468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.581589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.581599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 
00:26:44.254 [2024-07-15 17:08:50.581785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.581795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.581994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.582024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.582310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.582341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.582605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.582634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.582854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.582883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 
00:26:44.254 [2024-07-15 17:08:50.583094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.583123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.583351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.583361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.583563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.583572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.583820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.583830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 00:26:44.254 [2024-07-15 17:08:50.584040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.254 [2024-07-15 17:08:50.584050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.254 qpair failed and we were unable to recover it. 
00:26:44.254 [2024-07-15 17:08:50.584221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.584236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.584444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.584454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.584755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.584784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.585063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.585092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.585353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.585396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.585525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.585535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.585716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.585726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.585958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.585987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.586314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.586325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.586517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.586547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.586765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.586794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.587106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.587135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.587409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.587439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.254 [2024-07-15 17:08:50.587654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.254 [2024-07-15 17:08:50.587683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.254 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.587847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.587877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.588171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.588199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.588407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.588475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.588716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.588749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.589060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.589091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.589252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.589284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.589510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.589540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.589746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.589760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.589988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.590002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.590132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.590145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.590390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.590421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.590635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.590665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.590959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.590989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.591283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.591322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.591495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.591525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.591731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.591760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.591992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.592021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.592236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.592268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.592488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.592519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.592815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.592829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.593019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.593033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.593289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.593303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.593497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.593510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.593679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.593693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.593959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.593988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.594156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.594185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.594487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.594518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.594681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.594694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.594831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.594844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.595016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.595030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.595156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.595169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.595353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.595367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.595585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.595599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.595788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.595801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.255 [2024-07-15 17:08:50.595997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.255 [2024-07-15 17:08:50.596027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.255 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.596274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.596304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.596524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.596554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.596754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.596784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.597009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.597038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.597316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.597346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.597599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.597629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.597771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.597785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.597993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.598022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.598301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.598333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.598537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.598566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.598714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.598728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.598943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.598972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.599170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.599199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.599504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.599535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.599751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.599780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.600026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.600055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.600296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.600327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.600617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.600646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.600854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.600889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.601155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.601185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.601507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.601537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.601795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.601809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.602076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.602089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.602214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.602234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.602417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.602431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.602601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.602614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.602731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.602745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.603016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.603045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.603311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.603342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.603515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.603544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.603790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.603804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.603987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.604001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.604219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.604237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.604489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.604503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.604670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.604683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.604868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.604897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.605164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.605194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.605417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.605447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.605699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.605730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.256 qpair failed and we were unable to recover it.
00:26:44.256 [2024-07-15 17:08:50.606006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.256 [2024-07-15 17:08:50.606036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.257 qpair failed and we were unable to recover it.
00:26:44.257 [2024-07-15 17:08:50.606321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.257 [2024-07-15 17:08:50.606351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.257 qpair failed and we were unable to recover it.
00:26:44.257 [2024-07-15 17:08:50.606553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.257 [2024-07-15 17:08:50.606583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.257 qpair failed and we were unable to recover it.
00:26:44.257 [2024-07-15 17:08:50.606792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.257 [2024-07-15 17:08:50.606821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.257 qpair failed and we were unable to recover it.
00:26:44.257 [2024-07-15 17:08:50.607040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.257 [2024-07-15 17:08:50.607070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.257 qpair failed and we were unable to recover it.
00:26:44.257 [2024-07-15 17:08:50.607278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.257 [2024-07-15 17:08:50.607308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.257 qpair failed and we were unable to recover it.
00:26:44.257 [2024-07-15 17:08:50.607581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.257 [2024-07-15 17:08:50.607608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.257 qpair failed and we were unable to recover it.
00:26:44.257 [2024-07-15 17:08:50.607872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.607883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.608120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.608131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.608292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.608303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.608500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.608530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.608750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.608780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 
00:26:44.257 [2024-07-15 17:08:50.609054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.609084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.609352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.609382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.609602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.609632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.609920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.609950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.610246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.610277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 
00:26:44.257 [2024-07-15 17:08:50.610440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.610469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.610684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.610695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.610917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.610931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.611124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.611134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.611375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.611385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 
00:26:44.257 [2024-07-15 17:08:50.611581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.611592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.611777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.611787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.612021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.612050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.612271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.257 [2024-07-15 17:08:50.612302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.257 qpair failed and we were unable to recover it. 00:26:44.257 [2024-07-15 17:08:50.612516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.612546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 
00:26:44.258 [2024-07-15 17:08:50.612763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.612793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.613014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.613043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.613313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.613344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.613608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.613618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.613820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.613830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 
00:26:44.258 [2024-07-15 17:08:50.614073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.614083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.614320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.614331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.614509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.614519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.614747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.614777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.614993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.615023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 
00:26:44.258 [2024-07-15 17:08:50.615246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.615278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.615500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.615541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.615768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.615778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.615980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.615990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.616187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.616197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 
00:26:44.258 [2024-07-15 17:08:50.616307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.616316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.616544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.616554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.616732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.616743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.616951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.616961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.617213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.617253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 
00:26:44.258 [2024-07-15 17:08:50.617494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.617524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.617751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.617780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.618066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.618095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.618338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.618369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.618585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.618595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 
00:26:44.258 [2024-07-15 17:08:50.618724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.618734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.618898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.618908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.619160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.619170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.619377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.619388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 00:26:44.258 [2024-07-15 17:08:50.619561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.258 [2024-07-15 17:08:50.619571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.258 qpair failed and we were unable to recover it. 
00:26:44.259 [2024-07-15 17:08:50.619752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.619761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.620024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.620053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.620274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.620311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.620487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.620517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.620672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.620682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 
00:26:44.259 [2024-07-15 17:08:50.620961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.620991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.621128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.621157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.621383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.621413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.621571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.621601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.621770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.621799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 
00:26:44.259 [2024-07-15 17:08:50.622094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.622123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.622344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.622374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.622598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.622628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.622887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.622916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.623202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.623242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 
00:26:44.259 [2024-07-15 17:08:50.623420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.623450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.623632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.623661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.623799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.623810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.624030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.624059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.624275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.624306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 
00:26:44.259 [2024-07-15 17:08:50.624528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.624557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.624712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.624721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.624946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.624976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.625241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.625272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.625562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.625592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 
00:26:44.259 [2024-07-15 17:08:50.625758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.625768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.625983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.625994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.626149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.626160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.626326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.626336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.626443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.626453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 
00:26:44.259 [2024-07-15 17:08:50.626633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.626644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.626754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.626764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.626928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.626939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.627104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.627114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 00:26:44.259 [2024-07-15 17:08:50.627292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.259 [2024-07-15 17:08:50.627303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.259 qpair failed and we were unable to recover it. 
00:26:44.260 [2024-07-15 17:08:50.627406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.260 [2024-07-15 17:08:50.627416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.260 qpair failed and we were unable to recover it. 00:26:44.260 [2024-07-15 17:08:50.627586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.260 [2024-07-15 17:08:50.627596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.260 qpair failed and we were unable to recover it. 00:26:44.260 [2024-07-15 17:08:50.627821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.260 [2024-07-15 17:08:50.627831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.260 qpair failed and we were unable to recover it. 00:26:44.260 [2024-07-15 17:08:50.627942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.260 [2024-07-15 17:08:50.627952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.260 qpair failed and we were unable to recover it. 00:26:44.260 [2024-07-15 17:08:50.628201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.260 [2024-07-15 17:08:50.628211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.260 qpair failed and we were unable to recover it. 
00:26:44.262 [2024-07-15 17:08:50.642482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.262 [2024-07-15 17:08:50.642518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.262 qpair failed and we were unable to recover it.
00:26:44.263 [2024-07-15 17:08:50.650728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.650738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.651012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.651022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.651246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.651257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.651378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.651388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.651510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.651520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 
00:26:44.263 [2024-07-15 17:08:50.651620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.651630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.651889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.651901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.652077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.652087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.652274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.652284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.652462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.652472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 
00:26:44.263 [2024-07-15 17:08:50.652600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.652610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.652724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.652735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.652845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.652855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.653112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.653123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 00:26:44.263 [2024-07-15 17:08:50.653235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.653245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.263 qpair failed and we were unable to recover it. 
00:26:44.263 [2024-07-15 17:08:50.653468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.263 [2024-07-15 17:08:50.653478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.653588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.653598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.653705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.653716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.653899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.653910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.654077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.654087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 
00:26:44.264 [2024-07-15 17:08:50.654334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.654344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.654465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.654476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.654662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.654673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.654792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.654802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.654989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.654999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 
00:26:44.264 [2024-07-15 17:08:50.655168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.655179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.655351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.655362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.655473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.655483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.655637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.655646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.655779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.655789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 
00:26:44.264 [2024-07-15 17:08:50.655994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.656004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.656200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.656221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.656434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.656445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.656615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.656625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.656788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.656798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 
00:26:44.264 [2024-07-15 17:08:50.656991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.657002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.657167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.657177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.657333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.657343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.657516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.657526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.657632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.657644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 
00:26:44.264 [2024-07-15 17:08:50.657748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.657757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.657865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.657875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.657996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.658006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.658164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.658174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 00:26:44.264 [2024-07-15 17:08:50.658285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.264 [2024-07-15 17:08:50.658295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.264 qpair failed and we were unable to recover it. 
00:26:44.264 [2024-07-15 17:08:50.658464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.658474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.658592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.658602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.658756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.658766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.658944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.658954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.659125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.659135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 
00:26:44.265 [2024-07-15 17:08:50.659417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.659428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.659531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.659541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.659737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.659747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.659861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.659872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.659993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.660003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 
00:26:44.265 [2024-07-15 17:08:50.660240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.660251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.660391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.660401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.660512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.660522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.660644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.660654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.660775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.660785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 
00:26:44.265 [2024-07-15 17:08:50.660901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.660911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.661075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.661085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.661243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.661253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.661375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.661385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.661546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.661557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 
00:26:44.265 [2024-07-15 17:08:50.661667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.661677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.661792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.661803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.662006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.662016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.662262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.662273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.662519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.662529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 
00:26:44.265 [2024-07-15 17:08:50.662700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.662710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.662820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.662830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.663054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.663064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.663287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.663298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.663457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.663468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 
00:26:44.265 [2024-07-15 17:08:50.663584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.663594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.663765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.663775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.664086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.664096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.664294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.664304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.664505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.664517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 
00:26:44.265 [2024-07-15 17:08:50.664635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.664645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.664802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.664812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.265 [2024-07-15 17:08:50.664986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.265 [2024-07-15 17:08:50.664996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.265 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.665120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.665130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.665301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.665311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 
00:26:44.266 [2024-07-15 17:08:50.665435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.665446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.665558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.665567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.665707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.665717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.665830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.665840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.665997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.666007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 
00:26:44.266 [2024-07-15 17:08:50.666290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.666301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.666475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.666485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.666604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.666613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.666729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.666739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.666914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.666923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 
00:26:44.266 [2024-07-15 17:08:50.667213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.667223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.667415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.667425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.667625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.667635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.667792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.667802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.667910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.667919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 
00:26:44.266 [2024-07-15 17:08:50.668118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.668128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.668302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.668312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.668535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.668545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.668720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.668730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.668914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.668924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 
00:26:44.266 [2024-07-15 17:08:50.669083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.669093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.669375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.669392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.669518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.669532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.669645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.669658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.669838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.669853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 
00:26:44.266 [2024-07-15 17:08:50.669977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.669991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.670245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.670259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.670425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.670438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.670670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.670685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.670860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.670875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 
00:26:44.266 [2024-07-15 17:08:50.671102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.671115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.671417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.671433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.671532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.671546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.671709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.671723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.672020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.672033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 
00:26:44.266 [2024-07-15 17:08:50.672222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.672242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.672427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.672441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.672560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.672573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.266 [2024-07-15 17:08:50.672710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.266 [2024-07-15 17:08:50.672725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.266 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.672982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.672995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 
00:26:44.267 [2024-07-15 17:08:50.673258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.673290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.673516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.673545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.673693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.673722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.673965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.673979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.674147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.674161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 
00:26:44.267 [2024-07-15 17:08:50.674400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.674414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.674650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.674664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.674846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.674859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.675088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.675106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.675293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.675307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 
00:26:44.267 [2024-07-15 17:08:50.675475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.675489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.675735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.675748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.675986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.676000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.676107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.676121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.676308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.676323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 
00:26:44.267 [2024-07-15 17:08:50.676509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.676523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.676705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.676719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.676937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.676951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.677084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.677098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.677318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.677349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 
00:26:44.267 [2024-07-15 17:08:50.677541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.677571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.677776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.677807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.677961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.677975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.678148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.678162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.678340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.678355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 
00:26:44.267 [2024-07-15 17:08:50.678586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.678599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.678713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.678726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.678837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.678851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.679047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.679062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.679243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.679256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 
00:26:44.267 [2024-07-15 17:08:50.679495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.679525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.679732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.267 [2024-07-15 17:08:50.679761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.267 qpair failed and we were unable to recover it. 00:26:44.267 [2024-07-15 17:08:50.679974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.680003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.680269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.680299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.680546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.680576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 
00:26:44.268 [2024-07-15 17:08:50.681006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.681026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.681242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.681258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.681484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.681515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.681676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.681705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.681958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.681988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 
00:26:44.268 [2024-07-15 17:08:50.682280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.682311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.682557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.682586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.682831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.682872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.683126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.683139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.683380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.683394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 
00:26:44.268 [2024-07-15 17:08:50.683575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.683589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.683727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.683766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.684090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.684120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.684372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.684403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.684663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.684728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 
00:26:44.268 [2024-07-15 17:08:50.684913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.684946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.685185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.685216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.685390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.685421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.685646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.685661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 00:26:44.268 [2024-07-15 17:08:50.685776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.268 [2024-07-15 17:08:50.685790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.268 qpair failed and we were unable to recover it. 
00:26:44.268 [2024-07-15 17:08:50.686043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.686057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.686180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.686194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.686366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.686380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.686636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.686649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.686946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.686977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.687252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.687284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.687578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.687608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.687928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.687966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.688188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.688218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.688456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.688488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.688753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.688782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.689107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.689137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.689436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.689468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.689681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.689711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.689927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.689957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.690193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.690223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.690435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.690466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.268 [2024-07-15 17:08:50.690662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.268 [2024-07-15 17:08:50.690692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.268 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.690900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.690914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.691099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.691113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.691381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.691414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.691597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.691627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.691881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.691910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.692116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.692146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.692366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.692397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.692609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.692622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.692811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.692826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.693035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.693064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.693303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.693334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.693625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.693655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.693869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.693899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.694192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.694223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.694461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.694491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.694732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.694762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.695000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.695014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.695192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.695205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.695339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.695354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.695472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.695486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.695619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.695633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.695941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.695955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.696138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.696152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.696279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.696294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.696493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.696507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.696739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.696753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.697088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.697102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.697238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.697252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.697366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.697381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.697509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.697526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.697707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.697722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.697993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.698023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.698300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.698332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.698614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.698644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.698815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.698844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.699064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.699094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.699390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.699422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.699590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.699620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.699942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.699972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.700169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.269 [2024-07-15 17:08:50.700183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.269 qpair failed and we were unable to recover it.
00:26:44.269 [2024-07-15 17:08:50.700422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.700436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.700558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.700572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.700796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.700826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.700987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.701017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.701241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.701273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.701420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.701450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.701723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.701753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.702006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.702020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.702262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.702276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.702461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.702475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.702670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.702700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.702939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.702968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.703213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.703251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.703537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.703566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.703827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.703857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.704143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.704173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.704498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.704530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.704765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.704795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.705104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.705133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.705316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.705346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.705567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.705598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.705828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.705843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.706043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.706057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.706167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.706181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.706376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.706390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.706577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.706590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.706712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.706726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.706952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.706982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.707142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.707172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.707400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.707432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.707583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.707597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.707788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.707818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.708099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.708129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.708332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.708363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.708534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.708564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.708714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.708744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.709071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.270 [2024-07-15 17:08:50.709084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.270 qpair failed and we were unable to recover it.
00:26:44.270 [2024-07-15 17:08:50.709210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.270 [2024-07-15 17:08:50.709230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.270 qpair failed and we were unable to recover it. 00:26:44.270 [2024-07-15 17:08:50.709394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.270 [2024-07-15 17:08:50.709408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.270 qpair failed and we were unable to recover it. 00:26:44.270 [2024-07-15 17:08:50.709626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.270 [2024-07-15 17:08:50.709656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.270 qpair failed and we were unable to recover it. 00:26:44.270 [2024-07-15 17:08:50.709971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.270 [2024-07-15 17:08:50.710012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.710182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.710195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 
00:26:44.271 [2024-07-15 17:08:50.710391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.710405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.710545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.710572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.710790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.710819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.711029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.711060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.711271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.711301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 
00:26:44.271 [2024-07-15 17:08:50.711623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.711653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.711963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.711993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.712207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.712244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.712510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.712540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.712752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.712782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 
00:26:44.271 [2024-07-15 17:08:50.712999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.713012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.713212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.713231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.713349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.713363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.713530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.713543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.713727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.713743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 
00:26:44.271 [2024-07-15 17:08:50.714036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.714050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.714247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.714265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.714471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.714485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.714681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.714710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.715048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.715078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 
00:26:44.271 [2024-07-15 17:08:50.715299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.715331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.715602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.715631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.715936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.715965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.716269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.716299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.716471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.716501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 
00:26:44.271 [2024-07-15 17:08:50.716715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.716745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.717072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.717114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.717371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.717401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.717586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.717616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.717836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.717849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 
00:26:44.271 [2024-07-15 17:08:50.718120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.718134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.718349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.718364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.271 [2024-07-15 17:08:50.718596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.271 [2024-07-15 17:08:50.718610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.271 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.718730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.718743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.718993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.719007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 
00:26:44.272 [2024-07-15 17:08:50.719191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.719205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.719357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.719371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.719628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.719657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.719931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.719961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.720237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.720268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 
00:26:44.272 [2024-07-15 17:08:50.720453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.720482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.720699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.720713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.720894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.720908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.721094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.721108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.721317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.721332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 
00:26:44.272 [2024-07-15 17:08:50.721510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.721524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.721708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.721737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.722057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.722087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.722249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.722280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.722492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.722522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 
00:26:44.272 [2024-07-15 17:08:50.722740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.722770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.723019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.723033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.723306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.723321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.723496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.723510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.723633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.723651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 
00:26:44.272 [2024-07-15 17:08:50.723769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.723782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.723996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.724010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.724185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.724198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.724318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.724332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.724524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.724539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 
00:26:44.272 [2024-07-15 17:08:50.724726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.724740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.725028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.725057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.725209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.725246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.725469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.725498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.725646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.725659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 
00:26:44.272 [2024-07-15 17:08:50.725886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.725916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.726125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.726154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.726366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.726398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.726549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.726578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.726740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.726769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 
00:26:44.272 [2024-07-15 17:08:50.726896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.726909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.727230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.727245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.727490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.272 [2024-07-15 17:08:50.727504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.272 qpair failed and we were unable to recover it. 00:26:44.272 [2024-07-15 17:08:50.727685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.273 [2024-07-15 17:08:50.727698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.273 qpair failed and we were unable to recover it. 00:26:44.273 [2024-07-15 17:08:50.727832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.273 [2024-07-15 17:08:50.727870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.273 qpair failed and we were unable to recover it. 
00:26:44.273 [2024-07-15 17:08:50.728173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.273 [2024-07-15 17:08:50.728203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.273 qpair failed and we were unable to recover it. 00:26:44.273 [2024-07-15 17:08:50.728373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.273 [2024-07-15 17:08:50.728403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.273 qpair failed and we were unable to recover it. 00:26:44.273 [2024-07-15 17:08:50.728566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.273 [2024-07-15 17:08:50.728596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.273 qpair failed and we were unable to recover it. 00:26:44.273 [2024-07-15 17:08:50.728826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.273 [2024-07-15 17:08:50.728856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.273 qpair failed and we were unable to recover it. 00:26:44.273 [2024-07-15 17:08:50.729159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.273 [2024-07-15 17:08:50.729188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.273 qpair failed and we were unable to recover it. 
00:26:44.273 [2024-07-15 17:08:50.729392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.729422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.729634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.729663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.729871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.729900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.730195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.730231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.730396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.730426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.730594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.730624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.730840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.730869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.731096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.731126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.731362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.731394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.731662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.731692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.731980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.731993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.732251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.732266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.732498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.732512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.732764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.732777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.733034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.733050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.733292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.733306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.733437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.733452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.733572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.733586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.733818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.733832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.734025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.734038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.734291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.734305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.734558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.734571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.734707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.734720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.734840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.734854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.735109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.735122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.735352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.735365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.735482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.735496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.735675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.735689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.735948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.735962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.736083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.736096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.736332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.736346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.736471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.736484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.736582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.736595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.273 qpair failed and we were unable to recover it.
00:26:44.273 [2024-07-15 17:08:50.736778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.273 [2024-07-15 17:08:50.736791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.736980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.737009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.737222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.737259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.737430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.737460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.737676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.737690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.737969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.737983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.738168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.738182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.738359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.738373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.738616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.738646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.738844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.738873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.739152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.739182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.739396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.739427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.739671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.739700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.739862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.739876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.740070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.740099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.740381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.740412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.740659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.740689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.740918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.740931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.741110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.741124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.741237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.741251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.741507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.741521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.741688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.741704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.741886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.741899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.742134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.742164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.742410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.742441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.742681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.742711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.742902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.742916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.743191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.743204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.743351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.743365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.743580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.743593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.743724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.743738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.743863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.743877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.744001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.744014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.744139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.744152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.744320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.744334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.744595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.744609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.744728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.744742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.744855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.744869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.744971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.744984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.745215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.745233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.745384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.274 [2024-07-15 17:08:50.745397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.274 qpair failed and we were unable to recover it.
00:26:44.274 [2024-07-15 17:08:50.745523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.745536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.745699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.745712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.745896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.745910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.746122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.746152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.746381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.746412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.746631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.746661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.746892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.746921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.747137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.747167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.747317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.747348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.747562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.747592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.747762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.747791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.748103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.748132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.748417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.748448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.748664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.748693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.748849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.748879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.749116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.749146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.749436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.749467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.749690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.749719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.749957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.749986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.750252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.750282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.750491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.750525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.750758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.750787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.751083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.751112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.751325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.751355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.751557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.751586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.751737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.751765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.752066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.752096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.752396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.752410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.752586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.752600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.752829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.752843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.753123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.753136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.753320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.753334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.753594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.753607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.753892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.753906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.754165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.754179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.754372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.754386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.754663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.754676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.754885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.754898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.755137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.755150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.755374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.755388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.275 qpair failed and we were unable to recover it.
00:26:44.275 [2024-07-15 17:08:50.755573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.275 [2024-07-15 17:08:50.755586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.276 qpair failed and we were unable to recover it.
00:26:44.276 [2024-07-15 17:08:50.755820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.755849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.756112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.756142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.756383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.756414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.756709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.756738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.757034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.757063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 
00:26:44.276 [2024-07-15 17:08:50.757280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.757310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.757534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.757564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.757821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.757834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.757956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.757969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.758144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.758157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 
00:26:44.276 [2024-07-15 17:08:50.758339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.758353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.758471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.758484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.758600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.758614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.758842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.758856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.759088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.759101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 
00:26:44.276 [2024-07-15 17:08:50.759387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.759401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.759582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.759595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.759880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.759894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.760173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.760187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.760434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.760450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 
00:26:44.276 [2024-07-15 17:08:50.760662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.760676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.760839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.760853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.761046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.761076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.761376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.761406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.761700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.761730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 
00:26:44.276 [2024-07-15 17:08:50.762022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.762052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.762319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.762349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.762618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.762648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.762941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.762970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.763275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.763306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 
00:26:44.276 [2024-07-15 17:08:50.763514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.763544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.763808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.763837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.764110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.764139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.764460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.276 [2024-07-15 17:08:50.764491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.276 qpair failed and we were unable to recover it. 00:26:44.276 [2024-07-15 17:08:50.764717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.764731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 
00:26:44.277 [2024-07-15 17:08:50.764969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.764982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.765243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.765257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.765503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.765516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.765699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.765712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.765991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.766005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 
00:26:44.277 [2024-07-15 17:08:50.766241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.766255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.766510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.766524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.766650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.766663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.766935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.766964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.767199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.767246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 
00:26:44.277 [2024-07-15 17:08:50.767544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.767573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.767734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.767764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.768060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.768090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.768376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.768390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.768571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.768585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 
00:26:44.277 [2024-07-15 17:08:50.768837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.768851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.769098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.769111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.769357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.769371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.769602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.769615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.769817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.769831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 
00:26:44.277 [2024-07-15 17:08:50.769995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.770036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.770303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.770333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.770603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.770633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.770873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.770903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.771194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.771269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 
00:26:44.277 [2024-07-15 17:08:50.771577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.771606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.771869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.771898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.772163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.772192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.772522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.772553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.772813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.772843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 
00:26:44.277 [2024-07-15 17:08:50.773125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.773138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.773378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.773393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.773588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.773602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.773713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.773726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.773982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.774000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 
00:26:44.277 [2024-07-15 17:08:50.774259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.774290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.774578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.774607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.774903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.774932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.775264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.775296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 00:26:44.277 [2024-07-15 17:08:50.775562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.277 [2024-07-15 17:08:50.775592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.277 qpair failed and we were unable to recover it. 
00:26:44.277 [2024-07-15 17:08:50.775746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.278 [2024-07-15 17:08:50.775776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.278 qpair failed and we were unable to recover it. 00:26:44.278 [2024-07-15 17:08:50.776088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.278 [2024-07-15 17:08:50.776118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.278 qpair failed and we were unable to recover it. 00:26:44.278 [2024-07-15 17:08:50.776323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.278 [2024-07-15 17:08:50.776353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.278 qpair failed and we were unable to recover it. 00:26:44.278 [2024-07-15 17:08:50.776639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.278 [2024-07-15 17:08:50.776669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.278 qpair failed and we were unable to recover it. 00:26:44.278 [2024-07-15 17:08:50.776920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.278 [2024-07-15 17:08:50.776934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.278 qpair failed and we were unable to recover it. 
00:26:44.278 [2024-07-15 17:08:50.777185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.278 [2024-07-15 17:08:50.777198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.278 qpair failed and we were unable to recover it. 00:26:44.278 [2024-07-15 17:08:50.777433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.278 [2024-07-15 17:08:50.777447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.278 qpair failed and we were unable to recover it. 00:26:44.278 [2024-07-15 17:08:50.777681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.278 [2024-07-15 17:08:50.777694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.278 qpair failed and we were unable to recover it. 00:26:44.278 [2024-07-15 17:08:50.777974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.278 [2024-07-15 17:08:50.777988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.278 qpair failed and we were unable to recover it. 00:26:44.278 [2024-07-15 17:08:50.778240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.278 [2024-07-15 17:08:50.778254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.278 qpair failed and we were unable to recover it. 
00:26:44.278 [2024-07-15 17:08:50.778513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.778527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.778874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.778944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.779212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.779258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.779532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.779563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.779857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.779886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.780146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.780160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.780414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.780428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.780688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.780702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.780979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.780993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.781252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.781266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.781467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.781481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.781688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.781701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.781933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.781947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.782201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.782215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.782383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.782398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.782665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.782695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.782983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.783012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.783231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.783245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.783413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.783427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.783660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.783690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.784020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.784049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.784413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.784429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.784606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.784620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.784873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.784887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.785123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.785152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.785426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.785456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.785768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.785804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.786063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.786077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.786307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.786325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.786438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.786452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.786707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.278 [2024-07-15 17:08:50.786720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.278 qpair failed and we were unable to recover it.
00:26:44.278 [2024-07-15 17:08:50.786910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.786924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.787165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.787178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.787464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.787494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.787759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.787788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.788003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.788041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.788295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.788309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.788487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.788500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.788734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.788763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.789063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.789093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.789295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.789326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.789618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.789647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.789947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.789961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.790209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.790223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.790465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.790478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.790644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.790658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.790834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.790847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.791010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.791023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.791255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.791269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.791487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.791500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.791693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.791707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.791834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.791848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.791965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.791979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.792102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.792115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.792413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.792444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.792655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.792690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.792889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.792902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.793084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.793097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.793285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.793316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.793550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.793580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.793872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.793901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.794120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.794150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.794389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.794419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.794657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.794686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.794966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.794994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.795296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.795310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.795487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.795501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.795753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.795766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.796008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.796021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.796270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.796285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.796516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.796530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.796711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.279 [2024-07-15 17:08:50.796725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.279 qpair failed and we were unable to recover it.
00:26:44.279 [2024-07-15 17:08:50.796981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.797010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.797302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.797333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.797628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.797657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.797950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.797980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.798219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.798258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.798571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.798600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.798756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.798786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.799080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.799109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.799316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.799330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.799542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.799555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.799733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.799747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.799985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.799999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.800205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.800219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.800513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.800527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.800780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.800794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.800893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.800906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.801182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.801195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.801376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.801391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.801569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.801598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.801862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.801891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.802155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.802184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.802486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.802516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.802741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.280 [2024-07-15 17:08:50.802771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.280 qpair failed and we were unable to recover it.
00:26:44.280 [2024-07-15 17:08:50.803061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.803102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 00:26:44.280 [2024-07-15 17:08:50.803335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.803351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 00:26:44.280 [2024-07-15 17:08:50.803565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.803579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 00:26:44.280 [2024-07-15 17:08:50.803750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.803764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 00:26:44.280 [2024-07-15 17:08:50.804020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.804049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 
00:26:44.280 [2024-07-15 17:08:50.804340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.804370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 00:26:44.280 [2024-07-15 17:08:50.804587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.804615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 00:26:44.280 [2024-07-15 17:08:50.804873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.804887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 00:26:44.280 [2024-07-15 17:08:50.805097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.805110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 00:26:44.280 [2024-07-15 17:08:50.805340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.805354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 
00:26:44.280 [2024-07-15 17:08:50.805529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.805543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 00:26:44.280 [2024-07-15 17:08:50.805779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.805792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.280 qpair failed and we were unable to recover it. 00:26:44.280 [2024-07-15 17:08:50.806049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.280 [2024-07-15 17:08:50.806063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.806321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.806335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.806617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.806646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 
00:26:44.281 [2024-07-15 17:08:50.806965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.806996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.807267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.807281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.807569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.807599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.807877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.807906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.808200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.808238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 
00:26:44.281 [2024-07-15 17:08:50.808538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.808567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.808799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.808829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.809093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.809122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.809337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.809368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.809635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.809664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 
00:26:44.281 [2024-07-15 17:08:50.809884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.809914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.810183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.810212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.810425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.810454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.810679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.810714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.811030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.811059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 
00:26:44.281 [2024-07-15 17:08:50.811294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.811324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.811623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.811652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.811879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.811908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.812105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.812118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.812321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.812335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 
00:26:44.281 [2024-07-15 17:08:50.812498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.812511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.812702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.812732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.813042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.813072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.813337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.813368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.813679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.813709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 
00:26:44.281 [2024-07-15 17:08:50.813990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.814019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.814336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.814367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.814600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.814630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.814841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.814870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.815185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.815198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 
00:26:44.281 [2024-07-15 17:08:50.815446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.815460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.815651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.815665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.815801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.815815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.816092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.816122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.816386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.816418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 
00:26:44.281 [2024-07-15 17:08:50.816663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.816693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.816977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.816991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.817248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.817262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.281 [2024-07-15 17:08:50.817497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.281 [2024-07-15 17:08:50.817511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.281 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.817764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.817777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 
00:26:44.282 [2024-07-15 17:08:50.818032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.818045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.818235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.818250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.818505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.818518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.818798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.818811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.818995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.819009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 
00:26:44.282 [2024-07-15 17:08:50.819175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.819189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.819421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.819434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.819696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.819726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.819995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.820025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.820229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.820243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 
00:26:44.282 [2024-07-15 17:08:50.820417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.820431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.820698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.820728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.820940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.820969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.821241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.821272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.821590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.821624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 
00:26:44.282 [2024-07-15 17:08:50.821893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.821923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.822143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.822173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.822375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.822390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.822645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.822659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.822862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.822892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 
00:26:44.282 [2024-07-15 17:08:50.823130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.823159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.823382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.823414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.823707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.823737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.823956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.823969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.824147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.824161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 
00:26:44.282 [2024-07-15 17:08:50.824423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.824454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.824668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.824697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.824986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.824999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.825127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.825141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 00:26:44.282 [2024-07-15 17:08:50.825375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.282 [2024-07-15 17:08:50.825389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.282 qpair failed and we were unable to recover it. 
00:26:44.282 [2024-07-15 17:08:50.825665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.282 [2024-07-15 17:08:50.825695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.282 qpair failed and we were unable to recover it.
00:26:44.282 [2024-07-15 17:08:50.825899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.282 [2024-07-15 17:08:50.825928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.282 qpair failed and we were unable to recover it.
00:26:44.282 [2024-07-15 17:08:50.826147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.282 [2024-07-15 17:08:50.826176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.282 qpair failed and we were unable to recover it.
00:26:44.282 [2024-07-15 17:08:50.826457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.282 [2024-07-15 17:08:50.826489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.282 qpair failed and we were unable to recover it.
00:26:44.282 [2024-07-15 17:08:50.826731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.282 [2024-07-15 17:08:50.826760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.282 qpair failed and we were unable to recover it.
00:26:44.282 [2024-07-15 17:08:50.827027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.282 [2024-07-15 17:08:50.827057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.282 qpair failed and we were unable to recover it.
00:26:44.282 [2024-07-15 17:08:50.827380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.282 [2024-07-15 17:08:50.827394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.282 qpair failed and we were unable to recover it.
00:26:44.282 [2024-07-15 17:08:50.827596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.282 [2024-07-15 17:08:50.827610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.282 qpair failed and we were unable to recover it.
00:26:44.282 [2024-07-15 17:08:50.827869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.282 [2024-07-15 17:08:50.827883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.282 qpair failed and we were unable to recover it.
00:26:44.282 [2024-07-15 17:08:50.828166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.828179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.828356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.828370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.828581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.828610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.828886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.828915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.829235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.829266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.829502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.829531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.829822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.829852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.830052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.830066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.830235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.830249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.830512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.830542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.830807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.830837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.831158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.831171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.831368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.831382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.831567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.831581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.831819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.831848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.832073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.832086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.832276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.832290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.832547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.832561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.832802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.832816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.833065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.833078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.833339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.833353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.833592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.833606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.833789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.833803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.834059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.834073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.834339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.834370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.834658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.834688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.834957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.834986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.835302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.835333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.835622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.835652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.835957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.835987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.836208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.836246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.836475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.836504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.836771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.836801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.837000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.837029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.837313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.837343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.837659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.837689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.837986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.838015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.838309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.838324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.838581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.838595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.838715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.838729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.838955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.838985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.283 qpair failed and we were unable to recover it.
00:26:44.283 [2024-07-15 17:08:50.839217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.283 [2024-07-15 17:08:50.839273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.839425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.839455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.839605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.839639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.839928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.839958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.840176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.840206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.840503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.840517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.840647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.840661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.840954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.840984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.841264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.841296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.841519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.841548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.841757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.841786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.842073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.842087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.842342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.842357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.842525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.842539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.842749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.842762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.843019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.843049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.843328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.843360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.843683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.843712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.843991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.844021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.844291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.844321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.844556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.844586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.844853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.844882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.845196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.845210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.845391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.845406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.845644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.845673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.845919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.845948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.846192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.846222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.846531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.846562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.846855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.846885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.847107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.847136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.847421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.847435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.847695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.847709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.847943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.847957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.848211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.848229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.848518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.848532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.284 [2024-07-15 17:08:50.848790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.284 [2024-07-15 17:08:50.848819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.284 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.849046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.849076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.849359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.849373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.849620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.849634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.849883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.849896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.850072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.850085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.850286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.850316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.850530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.850560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.850873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.850907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.851125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.851154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.851445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.851477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.851625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.851654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.851819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.851849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.852052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.852081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.852375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.852406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.852609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.852638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.852875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.852905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.853142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.853172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.853427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.853459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.853682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.853711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.853979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.854009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.854234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.854265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.854569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.854600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.854886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.854916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.855140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.855169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.855409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.855440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.855726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.855756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.856024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.856053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.856299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.856330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.856626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.285 [2024-07-15 17:08:50.856656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.285 qpair failed and we were unable to recover it.
00:26:44.285 [2024-07-15 17:08:50.856876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.856905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 00:26:44.285 [2024-07-15 17:08:50.857142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.857171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 00:26:44.285 [2024-07-15 17:08:50.857477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.857508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 00:26:44.285 [2024-07-15 17:08:50.857780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.857809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 00:26:44.285 [2024-07-15 17:08:50.858071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.858084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 
00:26:44.285 [2024-07-15 17:08:50.858288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.858305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 00:26:44.285 [2024-07-15 17:08:50.858514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.858528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 00:26:44.285 [2024-07-15 17:08:50.858788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.858802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 00:26:44.285 [2024-07-15 17:08:50.858984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.858997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 00:26:44.285 [2024-07-15 17:08:50.859260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.859291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 
00:26:44.285 [2024-07-15 17:08:50.859581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.859610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 00:26:44.285 [2024-07-15 17:08:50.859908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.285 [2024-07-15 17:08:50.859937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.285 qpair failed and we were unable to recover it. 00:26:44.285 [2024-07-15 17:08:50.860247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.860277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.860490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.860504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.860735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.860750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 
00:26:44.286 [2024-07-15 17:08:50.860989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.861002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.861238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.861253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.861514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.861527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.861804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.861818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.862080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.862093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 
00:26:44.286 [2024-07-15 17:08:50.862348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.862362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.862542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.862556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.862729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.862758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.862972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.863001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.863243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.863274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 
00:26:44.286 [2024-07-15 17:08:50.863537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.863550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.863809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.863823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.864004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.864018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.864205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.864219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.864399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.864414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 
00:26:44.286 [2024-07-15 17:08:50.864681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.864694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.864965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.864994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.865289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.865321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.865619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.865633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.865814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.865828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 
00:26:44.286 [2024-07-15 17:08:50.866086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.866099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.866284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.866298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.866505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.866519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.866685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.866699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.866883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.866897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 
00:26:44.286 [2024-07-15 17:08:50.867157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.867171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.867348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.867362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.867544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.867573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.867790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.867819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.868033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.868062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 
00:26:44.286 [2024-07-15 17:08:50.868264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.868278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.868463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.868479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.868716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.868729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.868919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.868932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.869123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.869136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 
00:26:44.286 [2024-07-15 17:08:50.869400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.869414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.869674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.869688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.869924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.286 [2024-07-15 17:08:50.869937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.286 qpair failed and we were unable to recover it. 00:26:44.286 [2024-07-15 17:08:50.870117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.870132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.870389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.870404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 
00:26:44.287 [2024-07-15 17:08:50.870664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.870678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.870962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.870976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.871239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.871253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.871488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.871502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.871689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.871703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 
00:26:44.287 [2024-07-15 17:08:50.871819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.871833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.872005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.872019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.872187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.872201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.872430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.872461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.872752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.872781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 
00:26:44.287 [2024-07-15 17:08:50.873097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.873127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.873283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.873314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.873614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.873643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.873934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.873963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.874261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.874291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 
00:26:44.287 [2024-07-15 17:08:50.874583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.874596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.874792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.874805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.874919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.874932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.875177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.875194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 00:26:44.287 [2024-07-15 17:08:50.875452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.287 [2024-07-15 17:08:50.875467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.287 qpair failed and we were unable to recover it. 
00:26:44.287 [2024-07-15 17:08:50.875675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.875689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.875877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.875891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.876069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.876082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.876288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.876302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.876506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.876519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.876784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.876798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.877006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.877020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.877298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.877312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.877577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.877591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.877771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.877784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.878064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.878078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.878193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.878206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.878463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.878477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.878729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.878742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.878933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.878947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.879123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.879137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.879374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.879388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.879651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.879665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.879794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.879808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.287 [2024-07-15 17:08:50.879946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.287 [2024-07-15 17:08:50.879961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.287 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.880130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.880143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.880390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.880404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.880604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.880618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.880822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.880835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.881074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.881088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.881298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.881313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.881606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.881620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.881883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.881897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.882134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.882148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.882399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.882413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.882676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.882690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.882859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.882872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.883126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.883140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.883399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.883413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.883649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.883663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.883843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.883857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.884122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.884136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.884414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.884428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.884610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.884624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.884859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.884876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.885074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.885089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.885238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.885252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.885496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.885509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.885728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.885741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.886018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.886032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.886215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.886233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.886374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.886388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.886624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.886639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.886750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.886764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.887028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.887042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.887242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.887257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.887542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.887556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.887791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.887805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.888000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.888014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.888202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.888215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.888385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.888399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.888635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-15 17:08:50.888649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.288 qpair failed and we were unable to recover it.
00:26:44.288 [2024-07-15 17:08:50.888840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.888855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.889045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.889059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.889249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.889263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.889451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.889464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.889725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.889739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.890041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.890055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.890317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.890331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.890582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.890596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.890826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.890839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.891102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.891118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.891377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.891392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.891626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.891639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.891886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.891900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.892078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.892092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.892279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.892293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.892546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.892560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.892775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.892788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.892896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.892910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.893200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.893213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.893427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.893441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.893612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.893626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.893879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.893892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.894101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.894115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.894330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.894344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.894534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.894548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.894755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.894769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.894975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.894988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.895205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.895219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.895458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.895472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.895726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.895739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.895999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.896012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.896198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.896212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.896421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.896436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.289 [2024-07-15 17:08:50.896669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-15 17:08:50.896682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.289 qpair failed and we were unable to recover it.
00:26:44.290 [2024-07-15 17:08:50.896917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.290 [2024-07-15 17:08:50.896931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.290 qpair failed and we were unable to recover it.
00:26:44.290 [2024-07-15 17:08:50.897122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.290 [2024-07-15 17:08:50.897137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.290 qpair failed and we were unable to recover it.
00:26:44.290 [2024-07-15 17:08:50.897396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.290 [2024-07-15 17:08:50.897410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.290 qpair failed and we were unable to recover it.
00:26:44.290 [2024-07-15 17:08:50.897670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.290 [2024-07-15 17:08:50.897684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.290 qpair failed and we were unable to recover it.
00:26:44.290 [2024-07-15 17:08:50.897917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.290 [2024-07-15 17:08:50.897930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.290 qpair failed and we were unable to recover it.
00:26:44.290 [2024-07-15 17:08:50.898099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.290 [2024-07-15 17:08:50.898113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.290 qpair failed and we were unable to recover it.
00:26:44.290 [2024-07-15 17:08:50.898370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.290 [2024-07-15 17:08:50.898384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.290 qpair failed and we were unable to recover it.
00:26:44.290 [2024-07-15 17:08:50.898631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.290 [2024-07-15 17:08:50.898645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.290 qpair failed and we were unable to recover it.
00:26:44.290 [2024-07-15 17:08:50.898880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.898893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.899156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.899172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.899379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.899394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.899597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.899615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.899813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.899827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.900084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.900098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.900335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.900349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.900608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.900622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.900805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.900822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.901009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.901022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.901208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.901222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.901459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.901473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.901736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.901750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.901919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.901932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.902133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.902146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.902351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.571 [2024-07-15 17:08:50.902365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.571 qpair failed and we were unable to recover it.
00:26:44.571 [2024-07-15 17:08:50.902562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.902576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.902763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.902776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.902893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.902907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.903124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.903137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.903305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.903319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 
00:26:44.571 [2024-07-15 17:08:50.903494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.903508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.903716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.903731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.903965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.903979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.904163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.904178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.904493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.904507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 
00:26:44.571 [2024-07-15 17:08:50.904692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.904706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.904905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.904918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.905145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.905159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.905392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.905407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.905618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.905632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 
00:26:44.571 [2024-07-15 17:08:50.905914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.905927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.906196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.906210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.906463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.906477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.906721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.906735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.906940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.906953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 
00:26:44.571 [2024-07-15 17:08:50.907150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.907164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.907398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.907413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.907652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.907666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.907845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.907858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.908056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.908070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 
00:26:44.571 [2024-07-15 17:08:50.908236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.908250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.908432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.908445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.908704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.908717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.909019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.909033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.909164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.909177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 
00:26:44.571 [2024-07-15 17:08:50.909443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.909457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.909639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.909652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.909903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.909916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.910125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.910153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.910400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.910412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 
00:26:44.571 [2024-07-15 17:08:50.910644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.910654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.910822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.910832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.911080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.571 [2024-07-15 17:08:50.911090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.571 qpair failed and we were unable to recover it. 00:26:44.571 [2024-07-15 17:08:50.911312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.911323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.911600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.911610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 
00:26:44.572 [2024-07-15 17:08:50.911858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.911868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.911987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.911997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.912220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.912234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.912424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.912434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.912690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.912700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 
00:26:44.572 [2024-07-15 17:08:50.912869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.912879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.913126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.913142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.913311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.913321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.913507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.913517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.913770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.913780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 
00:26:44.572 [2024-07-15 17:08:50.913948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.913958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.914152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.914163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.914390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.914401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.914575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.914585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.914754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.914765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 
00:26:44.572 [2024-07-15 17:08:50.915016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.915026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.915277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.915287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.915558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.915568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.915738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.915748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.915996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.916007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 
00:26:44.572 [2024-07-15 17:08:50.916260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.916270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.916510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.916520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.916746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.916756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.917027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.917037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.917202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.917212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 
00:26:44.572 [2024-07-15 17:08:50.917333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.917344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.917463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.917473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.917578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.917588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.917815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.917825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.917918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.917927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 
00:26:44.572 [2024-07-15 17:08:50.918176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.918186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.918375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.918386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.918570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.918580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.918836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.918847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.919094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.919104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 
00:26:44.572 [2024-07-15 17:08:50.919279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.919290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.919550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.919561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.919808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.919818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.920069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.920080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 00:26:44.572 [2024-07-15 17:08:50.920327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.572 [2024-07-15 17:08:50.920338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.572 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.944653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.944682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.944815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.944844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.945133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.945162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.945303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.945315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.945519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.945529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.945683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.945693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.945944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.945954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.946231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.946241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.946463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.946473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.946714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.946724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.946881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.946890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.947048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.947058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.947364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.947394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.947590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.947620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.947865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.947895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.948180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.948210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.948539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.948569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.948837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.948866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.949134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.949171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.949423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.949433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.949674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.949684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.949881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.949891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.950000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.950010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.950239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.950249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.950360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.950370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.950569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.950579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.950693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.950702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.950874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.950884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.951041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.951051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.951282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.951292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.951464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.951502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.951819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.951848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.952065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.952095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.952386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.952417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.952587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.952616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.952902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.952931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.953198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.953237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.953477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.953506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.953771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.953800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.954037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.954067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.954353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.954363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.954523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.954533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.954776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.954786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.954987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.954999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.955243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.955253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.955502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.955512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.955619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.955629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.955782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.955792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.956018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.956028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.956128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.956138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.956316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.956326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.956480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.956490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.956734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.956744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.956979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.957008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.957302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.957332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.957624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.957653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.957947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.957977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.958283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.958314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.958624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.958653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.958942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.958971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.959262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.959293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.959592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.959621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.959920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.959950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.960245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.960276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.960513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.960522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 
00:26:44.574 [2024-07-15 17:08:50.960768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.960779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.574 [2024-07-15 17:08:50.960931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-15 17:08:50.960941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.574 qpair failed and we were unable to recover it. 00:26:44.575 [2024-07-15 17:08:50.961214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.961255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 00:26:44.575 [2024-07-15 17:08:50.961453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.961483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 00:26:44.575 [2024-07-15 17:08:50.961725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.961754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 
00:26:44.575 [2024-07-15 17:08:50.962049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.962078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 00:26:44.575 [2024-07-15 17:08:50.962357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.962368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 00:26:44.575 [2024-07-15 17:08:50.962616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.962626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 00:26:44.575 [2024-07-15 17:08:50.962909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.962937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 00:26:44.575 [2024-07-15 17:08:50.963236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.963267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 
00:26:44.575 [2024-07-15 17:08:50.963558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.963587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 00:26:44.575 [2024-07-15 17:08:50.963808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.963837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 00:26:44.575 [2024-07-15 17:08:50.964044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.964073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 00:26:44.575 [2024-07-15 17:08:50.964293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.964303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 00:26:44.575 [2024-07-15 17:08:50.964526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-15 17:08:50.964536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.575 qpair failed and we were unable to recover it. 
00:26:44.576 [2024-07-15 17:08:50.993146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.993211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.993526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.993565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.993850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.993863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.994043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.994056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.994244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.994275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 
00:26:44.576 [2024-07-15 17:08:50.994566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.994595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.994833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.994863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.995138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.995168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.995455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.995485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.995781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.995810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 
00:26:44.576 [2024-07-15 17:08:50.996102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.996132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.996424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.996438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.996699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.996713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.996961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.996974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.997159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.997172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 
00:26:44.576 [2024-07-15 17:08:50.997431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.997445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.997693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.997707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.997944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.997958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.998192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.998205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.998498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.998512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 
00:26:44.576 [2024-07-15 17:08:50.998790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.998804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.999030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.999043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.999283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.999296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.999526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.999540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:50.999668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.999682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 
00:26:44.576 [2024-07-15 17:08:50.999864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-15 17:08:50.999897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.576 qpair failed and we were unable to recover it. 00:26:44.576 [2024-07-15 17:08:51.000110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.000140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.000360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.000396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.000608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.000622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.000881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.000894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.001150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.001163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.001396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.001410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.001666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.001681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.001886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.001900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.002161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.002175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.002315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.002329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.002588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.002617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.002909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.002938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.003101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.003131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.003392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.003406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.003597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.003610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.003878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.003892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.004178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.004191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.004373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.004388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.004566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.004595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.004859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.004889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.005046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.005076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.005389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.005419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.005700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.005730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.005973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.006002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.006293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.006323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.006589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.006618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.006927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.006940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.007108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.007122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.007325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.007365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.007596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.007625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.007917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.007946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.008216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.008264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.008504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.008533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.008798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.008808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.009000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.009010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.009126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.009136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.009453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.009484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.009753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.009782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.010094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.010123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.010406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.010416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.010681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.010691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.010931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.010942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.011117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.011127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.011352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.011362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.011481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.011491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.011662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.011672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.011857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.011886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.012096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.012125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.012412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.012450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.012612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.012622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.012812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.012841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.013070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.013100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.013299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.013330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.013586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.013596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.013839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.013849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.014107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.014122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.014356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.014370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.014621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.014635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 00:26:44.577 [2024-07-15 17:08:51.014868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.577 [2024-07-15 17:08:51.014881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.577 qpair failed and we were unable to recover it. 
00:26:44.577 [2024-07-15 17:08:51.015137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.015150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.015352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.015365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.015649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.015662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.015864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.015878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.016137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.016151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.016407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.016421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.016649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.016662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.016919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.016932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.017172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.017186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.017371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.017385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.017576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.017589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.017833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.017862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.018127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.018156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.018472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.018502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.018791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.018821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.019119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.019148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.019364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.019395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.019679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.019692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.019925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.019939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.577 [2024-07-15 17:08:51.020117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.577 [2024-07-15 17:08:51.020131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.577 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.020360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.020374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.020604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.020617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.020851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.020865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.021146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.021159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.021382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.021392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.021566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.021576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.021802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.021812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.022060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.022089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.022305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.022336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.022627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.022656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.022893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.022922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.023142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.023172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.023480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.023511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.023811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.023841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.024103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.024132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.024398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.024429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.024740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.024769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.025053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.025083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.025326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.025356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.025644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.025674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.025963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.025993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.026290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.026321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.026562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.026572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.026746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.026756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.026999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.027010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.027179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.027207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.027443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.027473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.027744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.027773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.028090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.028119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.028402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.028432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.028652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.028681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.028944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.028973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.029264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.029295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.029503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.029532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.029694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.029724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.030012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.030041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.030256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.030286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.030564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.030603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.030841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.030851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.031105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.031115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.031273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.031284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.031554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.031584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.031819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.031848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.032131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.032165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.032457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.032487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.032686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.032715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.032932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.032961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.033161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.033190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.033416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.033446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.033701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.033731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.034018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.034047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.034283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.034314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.034534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.034564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.034844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.034854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.035087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.035097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.035214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.035223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.035484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.035514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.035808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.035837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.036130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.036160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.036397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.036428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.036695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.036704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.036879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.036889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.037153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.037163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.037316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.037326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.037588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.037617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.037943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.037972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.038262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.038293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.038534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.038563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.038853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.038883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.039147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.039176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.578 [2024-07-15 17:08:51.039495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.578 [2024-07-15 17:08:51.039526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.578 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.039744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.039755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.040013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.040023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.040259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.040269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.040429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.040440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.040614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.040624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.040814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.040824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.041097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.041107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.041300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.041310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.041551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.041561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.041800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.041810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.042084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.042094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.042333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.042344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.042561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.042573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.042748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.042758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.042941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.042970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.043261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.043291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.043595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.043605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.043896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.043926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.044243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.044273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.044575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.044604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.044895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.044924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.045189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.045218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.045540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.045550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.045793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.045803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.046032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.046042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.046233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.046243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.046486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.046495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.046664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.046674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.046989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.047018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.047313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.047345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.047565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.047595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.047776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.047786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.047895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.047905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.048166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.048176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.048406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.048416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.048608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.048618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.048870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.048899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.049186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.049214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.049446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.049475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.049832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.049899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.050211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.050259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.050553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.050583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.050887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.050917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.051153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.051183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.051434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.051465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.051714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.051744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.052031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.052060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.052334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.052366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.052578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.052592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.052838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.052851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.053146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.053160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.053335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.053349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.053598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.053619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.053872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.053885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.054119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.054133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.054306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.054320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.054506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.054535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.054763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.054792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.055054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.055084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.055350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.055380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.055596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.055625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.055885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.055914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.056216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.056257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.056540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.056569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.056735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.056764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.057051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.057081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.057353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.057384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.057688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.057702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.057938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.057951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.058188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.058201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.058414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.058428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.058719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.058733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.058987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.059001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.059207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.059221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.059405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.059420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.059701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.059730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.060002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.060031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.060345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.579 [2024-07-15 17:08:51.060376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.579 qpair failed and we were unable to recover it.
00:26:44.579 [2024-07-15 17:08:51.060589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.060603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.060935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.061014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.061305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.061340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.061631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.061661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.061924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.061953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.062241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.062273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.062483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.062514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.062818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.062832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.063086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.063099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.063378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.063390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.063647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.063658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.063924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.063934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.064178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.064188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.064371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.064382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.064560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.064571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.064696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.064707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.064959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.064970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.065146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.065157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.065397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.065428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.065704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.065734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.066029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.580 [2024-07-15 17:08:51.066039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.580 qpair failed and we were unable to recover it.
00:26:44.580 [2024-07-15 17:08:51.066262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.066272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.066399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.066409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.066603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.066613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.066883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.066894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.067133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.067144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 
00:26:44.580 [2024-07-15 17:08:51.067258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.067269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.067465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.067476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.067655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.067666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.067773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.067783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.068035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.068046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 
00:26:44.580 [2024-07-15 17:08:51.068205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.068215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.068479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.068489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.068666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.068675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.068883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.068893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.069079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.069089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 
00:26:44.580 [2024-07-15 17:08:51.069209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.069219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.069349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.069360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.069541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.069551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.069673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.069683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.069956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.069988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 
00:26:44.580 [2024-07-15 17:08:51.070268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.070305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.070599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.070629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.070847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.070877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.071116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.071146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.071377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.071408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 
00:26:44.580 [2024-07-15 17:08:51.071684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.071714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.071956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.071985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.072280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.072311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.072527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.072556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.072757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.072786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 
00:26:44.580 [2024-07-15 17:08:51.073071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.073101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.073418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.073449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.073602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.073632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.073910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.073921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.074150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.074161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 
00:26:44.580 [2024-07-15 17:08:51.074332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.074342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.074596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.074625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.074837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.074866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.075158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.075187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.075431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.075462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 
00:26:44.580 [2024-07-15 17:08:51.075801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.075831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.076110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.076139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.076346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.076377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.076579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.076590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.076743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.076753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 
00:26:44.580 [2024-07-15 17:08:51.076973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.077002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.077167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.580 [2024-07-15 17:08:51.077197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.580 qpair failed and we were unable to recover it. 00:26:44.580 [2024-07-15 17:08:51.077527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.077558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.077816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.077845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.078060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.078089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 
00:26:44.581 [2024-07-15 17:08:51.078325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.078356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.078638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.078648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.078807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.078817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.078989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.079032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.079251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.079281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 
00:26:44.581 [2024-07-15 17:08:51.079491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.079520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.079783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.079793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.079953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.079963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.080161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.080191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.080498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.080528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 
00:26:44.581 [2024-07-15 17:08:51.080816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.080851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.081143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.081172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.081455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.081487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.081750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.081779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.082026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.082036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 
00:26:44.581 [2024-07-15 17:08:51.082282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.082293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.082478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.082488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.082682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.082693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.082903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.082933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.083154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.083184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 
00:26:44.581 [2024-07-15 17:08:51.083487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.083517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.083717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.083746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.083914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.083944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.084247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.084278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.084575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.084604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 
00:26:44.581 [2024-07-15 17:08:51.084800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.084810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.084938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.084948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.085196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.085258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.085567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.085596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.085882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.085911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 
00:26:44.581 [2024-07-15 17:08:51.086237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.086267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.086558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.086588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.086876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.086906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.087199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.087237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.087535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.087565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 
00:26:44.581 [2024-07-15 17:08:51.087845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.087855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.088102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.088112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.088281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.088292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.088549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.088579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.088793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.088822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 
00:26:44.581 [2024-07-15 17:08:51.088981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.089011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.089155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.089185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.089493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.089525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.089682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.089712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 00:26:44.581 [2024-07-15 17:08:51.089837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.581 [2024-07-15 17:08:51.089847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.581 qpair failed and we were unable to recover it. 
00:26:44.581 [2024-07-15 17:08:51.089962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.089972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.090247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.090258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.090425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.090435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.090609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.090620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.090778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.090789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.090943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.090955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.091079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.091088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.091205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.091216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.091381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.091391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.091493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.091503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.091626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.091636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.091793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.091803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.091903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.091916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.092139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.092149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.092287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.092298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.092453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.092463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.092575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.092585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.092763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.092774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.092987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.093017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.093222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.093261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.093463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.093492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.093713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.093742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.093951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.093980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.094182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.094212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.094434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.094464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.094659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.094670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.094805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.094815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.094911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.094921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.095090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.581 [2024-07-15 17:08:51.095100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.581 qpair failed and we were unable to recover it.
00:26:44.581 [2024-07-15 17:08:51.095289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.095300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.095388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.095397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.095569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.095598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.095837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.095868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.096012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.096041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.096264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.096295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.096568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.096598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.096740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.096770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.096963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.096973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.097079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.097090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.097314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.097324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.097492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.097503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.097749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.097760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.097927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.097937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.098044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.098054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.098155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.098165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.098324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.098337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.098495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.098505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.098606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.098619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.098794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.098805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.098905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.098915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.099016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.099026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.099203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.099213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.099399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.099410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.099522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.099532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.099689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.099699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.099805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.099816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.099918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.099928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.100176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.100186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.100308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.100319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.100479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.100490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.100650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.100678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.100891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.100920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.101191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.101220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.101391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.101422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.101640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.101668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.101823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.101834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.102068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.102098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.102299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.102330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.102505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.102534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.102671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.102682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.102928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.102938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.103119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.103130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.103277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.103311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.103468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.103502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.103768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.103836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.104001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.104035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.104201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.104250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.104405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.104435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.104640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.104670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.104805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.104834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.105030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.105043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.105212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.105232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.105342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.105356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.105541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.105554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.105733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.105762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.106000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.106037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.106191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.106221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.106450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.106480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.106754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.106767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.106944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.106957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.107125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.107138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.107312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.107326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.107574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.107587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.107711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.107725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.107838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.582 [2024-07-15 17:08:51.107851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420
00:26:44.582 qpair failed and we were unable to recover it.
00:26:44.582 [2024-07-15 17:08:51.108030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.582 [2024-07-15 17:08:51.108043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.582 qpair failed and we were unable to recover it. 00:26:44.582 [2024-07-15 17:08:51.108155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.582 [2024-07-15 17:08:51.108169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.582 qpair failed and we were unable to recover it. 00:26:44.582 [2024-07-15 17:08:51.108367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.582 [2024-07-15 17:08:51.108382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.582 qpair failed and we were unable to recover it. 00:26:44.582 [2024-07-15 17:08:51.108634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.582 [2024-07-15 17:08:51.108648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.582 qpair failed and we were unable to recover it. 00:26:44.582 [2024-07-15 17:08:51.108757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.582 [2024-07-15 17:08:51.108771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.582 qpair failed and we were unable to recover it. 
00:26:44.582 [2024-07-15 17:08:51.108878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.582 [2024-07-15 17:08:51.108891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.582 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.109073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.109087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.109274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.109304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.109467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.109497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.109701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.109731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.109884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.109913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.110120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.110134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.110249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.110264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.110440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.110454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.110663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.110676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.110796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.110810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.110932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.110946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.111141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.111159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.111333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.111348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.111457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.111471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.111582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.111595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.111841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.111855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.112021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.112034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.112222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.112261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.112414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.112445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.112644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.112673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.112888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.112918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.113134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.113164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.113378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.113409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.113567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.113597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.113869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.113914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.114083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.114096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.114270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.114284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.114407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.114421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.114667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.114680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.114857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.114871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.115119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.115133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.115324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.115355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.115629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.115658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.115865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.115895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.116033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.116046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.116303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.116317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.116481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.116522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.116813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.116842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.117098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.117127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.117419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.117450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.117740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.117770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.117999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.118028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.118296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.118327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.118555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.118584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.118874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.118904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.119126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.119155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.119391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.119421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.119616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.119630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.119798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.119811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.120010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.120039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.120310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.120353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.120666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.120708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.121070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.121100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.121251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.121282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.121584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.121614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.121876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.121905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.122190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.122221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.122395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.122425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.122688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.122718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.122995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.123024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.123291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.123321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.123531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.123560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.123707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.123736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.123892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.123922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.124139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.124168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.124337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.124367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.124634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.124663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.124878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.124908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.125123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.125153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.125396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.125427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.125684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.125698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.125881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.125894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.126086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.126100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.126310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.126323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 
00:26:44.583 [2024-07-15 17:08:51.126444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.126458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.126625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.126639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.126933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.126946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.583 [2024-07-15 17:08:51.127205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.583 [2024-07-15 17:08:51.127219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.583 qpair failed and we were unable to recover it. 00:26:44.584 [2024-07-15 17:08:51.127451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.584 [2024-07-15 17:08:51.127467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.584 qpair failed and we were unable to recover it. 
00:26:44.584 [2024-07-15 17:08:51.127633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.584 [2024-07-15 17:08:51.127647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.584 qpair failed and we were unable to recover it.
[... the same three-line error (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 17:08:51.127 through 17:08:51.152, first for tqpair=0xd91ed0, then for tqpair=0x7f4d44000b90, then for tqpair=0x7f4d4c000b90, in every case with addr=10.0.0.2, port=4420 ...]
00:26:44.585 [2024-07-15 17:08:51.152493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.152503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.152727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.152737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.152967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.152977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.153220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.153233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.153502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.153512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 
00:26:44.585 [2024-07-15 17:08:51.153671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.153682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.153933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.153943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.154188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.154198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.154445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.154455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.154633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.154643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 
00:26:44.585 [2024-07-15 17:08:51.154821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.154832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.155100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.155110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.155282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.155292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.155546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.155556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.155724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.155734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 
00:26:44.585 [2024-07-15 17:08:51.155923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.155933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.156133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.156143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.156305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.156316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.156479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.156489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.156663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.156673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 
00:26:44.585 [2024-07-15 17:08:51.156828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.156838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.157004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.157014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.157236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.157246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.157424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.157434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.157542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.157552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 
00:26:44.585 [2024-07-15 17:08:51.157718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.157728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.157949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.157959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.158133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.158144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.158317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.158327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.158498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.158509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 
00:26:44.585 [2024-07-15 17:08:51.158610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.158620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.158786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.158796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.159048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.159058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.585 [2024-07-15 17:08:51.159210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.585 [2024-07-15 17:08:51.159220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.585 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.159391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.159401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 
00:26:44.586 [2024-07-15 17:08:51.159627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.159638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.159818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.159828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.160072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.160082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.160296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.160307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.160484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.160494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 
00:26:44.586 [2024-07-15 17:08:51.160729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.160738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.160891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.160901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.161129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.161139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.161262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.161273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.161544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.161553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 
00:26:44.586 [2024-07-15 17:08:51.161782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.161793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.162048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.162059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.162312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.162322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.162561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.162571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.162794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.162804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 
00:26:44.586 [2024-07-15 17:08:51.162959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.162969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.163162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.163172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.163440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.163450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.163679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.163689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.163802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.163812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 
00:26:44.586 [2024-07-15 17:08:51.163989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.163999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.164241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.164252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.164360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.164370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.164583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.164593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.164819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.164829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 
00:26:44.586 [2024-07-15 17:08:51.165000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.165010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.165267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.165277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.165448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.165458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.165650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.165660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.165813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.165823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 
00:26:44.586 [2024-07-15 17:08:51.166052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.166062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.166248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.166259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.166480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.166490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.166761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.166771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.166993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.167003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 
00:26:44.586 [2024-07-15 17:08:51.167207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.167218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.167392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.167403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.167648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.167658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.167904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.167914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.168074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.168084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 
00:26:44.586 [2024-07-15 17:08:51.168326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.168337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.168563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.168573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.168743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.168752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.168924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.168934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 00:26:44.586 [2024-07-15 17:08:51.169204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.586 [2024-07-15 17:08:51.169214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.586 qpair failed and we were unable to recover it. 
00:26:44.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 238224 Killed "${NVMF_APP[@]}" "$@" 00:26:44.587 [2024-07-15 17:08:51.179480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.179491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.179676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.179686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.179808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.179818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:44.587 [2024-07-15 17:08:51.179997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.180008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 
00:26:44.587 [2024-07-15 17:08:51.180252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.180263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:44.587 [2024-07-15 17:08:51.180517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.180527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:44.587 [2024-07-15 17:08:51.180695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.180705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.180882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.180893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:44.587 qpair failed and we were unable to recover it. 
00:26:44.587 [2024-07-15 17:08:51.181113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.181124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:44.587 [2024-07-15 17:08:51.181293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.181304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.181559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.181569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.181664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.181674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.181935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.181946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 
00:26:44.587 [2024-07-15 17:08:51.182199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.182209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.182468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.182479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.182716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.182726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.182977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.182987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.183156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.183165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 
00:26:44.587 [2024-07-15 17:08:51.183390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.183401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.183490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.183499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.183724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.183734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.183976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.183987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.184152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.184162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 
00:26:44.587 [2024-07-15 17:08:51.184405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.184416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.184628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.184641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.184923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.184934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.185137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.185150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.185309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.185320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 
00:26:44.587 [2024-07-15 17:08:51.185476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.185485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.185660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.185669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.185934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.185944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.186145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.186154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.186339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.186349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 
00:26:44.587 [2024-07-15 17:08:51.186524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.186534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.186774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.186784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.187051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.187061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.187240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.187250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 [2024-07-15 17:08:51.187429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.187439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 
00:26:44.587 [2024-07-15 17:08:51.187639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.187669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=239046 00:26:44.587 [2024-07-15 17:08:51.187920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.187951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 239046 00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:44.587 [2024-07-15 17:08:51.188236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.587 [2024-07-15 17:08:51.188269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.587 qpair failed and we were unable to recover it. 
00:26:44.587 [2024-07-15 17:08:51.188488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 239046 ']'
00:26:44.587 [2024-07-15 17:08:51.188519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.587 qpair failed and we were unable to recover it.
00:26:44.587 [2024-07-15 17:08:51.188730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.587 [2024-07-15 17:08:51.188764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.587 qpair failed and we were unable to recover it.
00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:44.587 [2024-07-15 17:08:51.189009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.587 [2024-07-15 17:08:51.189039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.587 qpair failed and we were unable to recover it.
00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:44.587 [2024-07-15 17:08:51.189188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.587 [2024-07-15 17:08:51.189199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.587 qpair failed and we were unable to recover it.
00:26:44.587 [2024-07-15 17:08:51.189428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:44.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:44.587 [2024-07-15 17:08:51.189462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.587 qpair failed and we were unable to recover it.
00:26:44.587 [2024-07-15 17:08:51.189612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.587 [2024-07-15 17:08:51.189642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.587 qpair failed and we were unable to recover it.
00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:44.587 [2024-07-15 17:08:51.189849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.587 [2024-07-15 17:08:51.189882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.587 qpair failed and we were unable to recover it.
00:26:44.587 17:08:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:44.587 [2024-07-15 17:08:51.190159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.587 [2024-07-15 17:08:51.190172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.587 qpair failed and we were unable to recover it.
00:26:44.587 [2024-07-15 17:08:51.190347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.587 [2024-07-15 17:08:51.190357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.587 qpair failed and we were unable to recover it.
00:26:44.587 [2024-07-15 17:08:51.190549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.587 [2024-07-15 17:08:51.190560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.587 qpair failed and we were unable to recover it.
00:26:44.587 [2024-07-15 17:08:51.190737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.587 [2024-07-15 17:08:51.190766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.587 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.191074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.191104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.191324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.191355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.191685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.191715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.191906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.191917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.192115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.192145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.192318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.192349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.192614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.192644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.192865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.192895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.193096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.193127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.193319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.193330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.193456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.193466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.193637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.193647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.193801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.193812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.193984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.193994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.194360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.194371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.194534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.194544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.194765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.194775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.195021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.195031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.195286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.195297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.195498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.195509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.195678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.195689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.195951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.195985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.196271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.196303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.196524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.196553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.196819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.196848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.197012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.197041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.197301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.197312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.197558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.197568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.197677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.197687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.197944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.197954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.198067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.198077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.198319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.198329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.198498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.198508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.198735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.198746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.199027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.199037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.199296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.199307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.199539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.199550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.199771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.199782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.199905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.199915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.200101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.200111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.200332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.200343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.200521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.200532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.200762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.200792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.201073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.201085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.201196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.201206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.201440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.201451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.201617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.201627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.201902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.201931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.202220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.202304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.202580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.202613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.202834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.202848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.203017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.203032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.203248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.203280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.203571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.203601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.203906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.203935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.204238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.204269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.204535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.204565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.204904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.204934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.205196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.205213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.205479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.205494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.205735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.205751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.205929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.205942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.206183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.206214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.206461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.206493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.206732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.206763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.207036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.207066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.207263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.207279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.207526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.207539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.207708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.207723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.207833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.207847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.208075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.208089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.208258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.208272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.208456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.208469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.208734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.208764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.208999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.209029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.209243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.209260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.209524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.209553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.588 [2024-07-15 17:08:51.209771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.588 [2024-07-15 17:08:51.209800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.588 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.210091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.210120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.210395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.210425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.210693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.210722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.210867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.210896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.211165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.211179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.211372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.211402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.211567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.211596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.211870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.211900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.212099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.212129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.212333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.212364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.212582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.212612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.212773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.212803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.213101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.213115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.213290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.213304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.213425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.213438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.213616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.213630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.213794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.213808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.213938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.213952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.214077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.214091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.214266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.214280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.214466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.214480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.214662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.214676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.214795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.214810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.214978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.589 [2024-07-15 17:08:51.214993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.589 qpair failed and we were unable to recover it.
00:26:44.589 [2024-07-15 17:08:51.215082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.589 [2024-07-15 17:08:51.215103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.589 qpair failed and we were unable to recover it. 00:26:44.589 [2024-07-15 17:08:51.215278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.589 [2024-07-15 17:08:51.215292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.589 qpair failed and we were unable to recover it. 00:26:44.589 [2024-07-15 17:08:51.215481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.589 [2024-07-15 17:08:51.215495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.589 qpair failed and we were unable to recover it. 00:26:44.589 [2024-07-15 17:08:51.215606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.589 [2024-07-15 17:08:51.215619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.589 qpair failed and we were unable to recover it. 00:26:44.589 [2024-07-15 17:08:51.215849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.589 [2024-07-15 17:08:51.215864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.589 qpair failed and we were unable to recover it. 
00:26:44.589 [2024-07-15 17:08:51.215996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.589 [2024-07-15 17:08:51.216010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.589 qpair failed and we were unable to recover it. 00:26:44.589 [2024-07-15 17:08:51.216114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.589 [2024-07-15 17:08:51.216128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.589 qpair failed and we were unable to recover it. 00:26:44.589 [2024-07-15 17:08:51.216230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.589 [2024-07-15 17:08:51.216263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.589 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.216515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.216531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.216644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.216659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 
00:26:44.868 [2024-07-15 17:08:51.216929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.216960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.217164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.217193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.217485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.217516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.217796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.217825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.218029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.218044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 
00:26:44.868 [2024-07-15 17:08:51.218169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.218183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.218277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.218291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.218462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.218476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.218574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.218587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.218700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.218713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 
00:26:44.868 [2024-07-15 17:08:51.218823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.218836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.218966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.218979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.219161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.219199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.219444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.219474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.219628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.219657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 
00:26:44.868 [2024-07-15 17:08:51.219810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.219840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.868 [2024-07-15 17:08:51.219974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.868 [2024-07-15 17:08:51.220007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.868 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.220304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.220336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.220493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.220527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.220741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.220771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 
00:26:44.869 [2024-07-15 17:08:51.220934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.220963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.221114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.221128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.221299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.221333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.221637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.221668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.221821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.221851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 
00:26:44.869 [2024-07-15 17:08:51.222044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.222058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.222162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.222176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.222301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.222315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.222422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.222436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.222638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.222652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 
00:26:44.869 [2024-07-15 17:08:51.222904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.222918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.223038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.223055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.223219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.223238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.223430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.223444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.223699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.223713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 
00:26:44.869 [2024-07-15 17:08:51.223831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.223845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.224041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.224054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.224257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.224271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.224442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.224456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.224724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.224753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 
00:26:44.869 [2024-07-15 17:08:51.224963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.224992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.225219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.225258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.225411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.225441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.225652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.225681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.225875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.225889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 
00:26:44.869 [2024-07-15 17:08:51.226067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.226081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.226266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.226297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.226584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.226614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.226841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.226870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.227118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.227132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 
00:26:44.869 [2024-07-15 17:08:51.227338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.227353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.227522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.227551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.227773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.227801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.227958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.227987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.228130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.228169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 
00:26:44.869 [2024-07-15 17:08:51.228463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.228477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.228707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.869 [2024-07-15 17:08:51.228721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.869 qpair failed and we were unable to recover it. 00:26:44.869 [2024-07-15 17:08:51.228862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.870 [2024-07-15 17:08:51.228876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.870 qpair failed and we were unable to recover it. 00:26:44.870 [2024-07-15 17:08:51.229106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.870 [2024-07-15 17:08:51.229119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.870 qpair failed and we were unable to recover it. 00:26:44.870 [2024-07-15 17:08:51.229305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.870 [2024-07-15 17:08:51.229320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.870 qpair failed and we were unable to recover it. 
00:26:44.870 [2024-07-15 17:08:51.229473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.870 [2024-07-15 17:08:51.229487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.870 qpair failed and we were unable to recover it. 00:26:44.870 [2024-07-15 17:08:51.229734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.870 [2024-07-15 17:08:51.229748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.870 qpair failed and we were unable to recover it. 00:26:44.870 [2024-07-15 17:08:51.229879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.870 [2024-07-15 17:08:51.229897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.870 qpair failed and we were unable to recover it. 00:26:44.870 [2024-07-15 17:08:51.230073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.870 [2024-07-15 17:08:51.230087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.870 qpair failed and we were unable to recover it. 00:26:44.870 [2024-07-15 17:08:51.230371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.870 [2024-07-15 17:08:51.230385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.870 qpair failed and we were unable to recover it. 
00:26:44.870 [2024-07-15 17:08:51.230642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.870 [2024-07-15 17:08:51.230655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.870 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / qpair failed pair repeats for tqpair=0xd91ed0 ...]
00:26:44.870 [2024-07-15 17:08:51.231617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.870 [2024-07-15 17:08:51.231651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.870 qpair failed and we were unable to recover it.
00:26:44.870 [2024-07-15 17:08:51.235343] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization...
00:26:44.870 [2024-07-15 17:08:51.235390] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the same connect() failed (errno = 111) / qpair failed pair repeats for tqpair=0x7f4d54000b90, timestamps 17:08:51.235521 through 17:08:51.252897 ...]
00:26:44.873 [2024-07-15 17:08:51.253095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.253125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.253339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.253353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.253547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.253577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.253798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.253828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.254031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.254061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 
00:26:44.873 [2024-07-15 17:08:51.254205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.254220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.254339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.254353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.254529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.254543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.254701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.254715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.254899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.254913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 
00:26:44.873 [2024-07-15 17:08:51.255055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.255070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.255192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.255206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.255398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.255427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.255619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.255653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.255875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.255906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 
00:26:44.873 [2024-07-15 17:08:51.256050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.256084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.256240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.256251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.256444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.256474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.256768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.256798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.256991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.257021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 
00:26:44.873 [2024-07-15 17:08:51.257269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.257300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.257498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.257528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.257679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.257708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.258000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.258030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.258187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.258197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 
00:26:44.873 [2024-07-15 17:08:51.258362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.258410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.258611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.258641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.258819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.258849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.259128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.259139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 00:26:44.873 [2024-07-15 17:08:51.259377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.873 [2024-07-15 17:08:51.259387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.873 qpair failed and we were unable to recover it. 
00:26:44.873 [2024-07-15 17:08:51.259648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.259659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.259770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.259780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.259890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.259900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.260092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.260103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.260300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.260311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 
00:26:44.874 [2024-07-15 17:08:51.260471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.260504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.260652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.260680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.260848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.260877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.261032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.261062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.261236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.261267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 
00:26:44.874 [2024-07-15 17:08:51.261491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.261521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.261662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.261691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.261891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.261921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.262132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.262162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.262323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.262355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 
00:26:44.874 [2024-07-15 17:08:51.262501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.262531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.262733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.262763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.262894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.262924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.263067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.263095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.263309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.263321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 
00:26:44.874 [2024-07-15 17:08:51.263537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.263567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.263763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.263792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.874 [2024-07-15 17:08:51.264124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.264191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.264485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.264554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.264811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.264847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 
00:26:44.874 [2024-07-15 17:08:51.265003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.265017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.265302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.265332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.265641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.265671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.265819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.265848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.266055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.266085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 
00:26:44.874 [2024-07-15 17:08:51.266314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.266328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.266491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.266504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.266620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.266634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.266880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.266908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.267128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.267157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 
00:26:44.874 [2024-07-15 17:08:51.267319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.267363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.267577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.267606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.267872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.267901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.268186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.268215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 00:26:44.874 [2024-07-15 17:08:51.268410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.874 [2024-07-15 17:08:51.268424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.874 qpair failed and we were unable to recover it. 
00:26:44.874 [2024-07-15 17:08:51.268512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.875 [2024-07-15 17:08:51.268526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.875 qpair failed and we were unable to recover it. 00:26:44.875 [2024-07-15 17:08:51.268653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.875 [2024-07-15 17:08:51.268667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.875 qpair failed and we were unable to recover it. 00:26:44.875 [2024-07-15 17:08:51.268840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.875 [2024-07-15 17:08:51.268854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.875 qpair failed and we were unable to recover it. 00:26:44.875 [2024-07-15 17:08:51.268966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.875 [2024-07-15 17:08:51.268979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.875 qpair failed and we were unable to recover it. 00:26:44.875 [2024-07-15 17:08:51.269165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.875 [2024-07-15 17:08:51.269179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.875 qpair failed and we were unable to recover it. 
00:26:44.875 [2024-07-15 17:08:51.269317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.875 [2024-07-15 17:08:51.269331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.875 qpair failed and we were unable to recover it. 00:26:44.875 [2024-07-15 17:08:51.269599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.875 [2024-07-15 17:08:51.269613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.875 qpair failed and we were unable to recover it. 00:26:44.875 [2024-07-15 17:08:51.269793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.875 [2024-07-15 17:08:51.269807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.875 qpair failed and we were unable to recover it. 00:26:44.875 [2024-07-15 17:08:51.269929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.875 [2024-07-15 17:08:51.269942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.875 qpair failed and we were unable to recover it. 00:26:44.875 [2024-07-15 17:08:51.270058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.875 [2024-07-15 17:08:51.270072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.875 qpair failed and we were unable to recover it. 
00:26:44.876 [2024-07-15 17:08:51.276994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.876 [2024-07-15 17:08:51.277006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.876 qpair failed and we were unable to recover it.
00:26:44.878 [2024-07-15 17:08:51.288921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.288930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.289092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.289102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.289269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.289280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.289529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.289539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.289734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.289744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 
00:26:44.878 [2024-07-15 17:08:51.289835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.289844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.289966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.289977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.290081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.290090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.290186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.290196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.290369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.290380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 
00:26:44.878 [2024-07-15 17:08:51.290547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.290557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.290716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.290726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.290887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.290897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.290979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.290991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.291131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.291141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 
00:26:44.878 [2024-07-15 17:08:51.291357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.291377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.291519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.291530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.291599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.291608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.291717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.291728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.291865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.291875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 
00:26:44.878 [2024-07-15 17:08:51.291974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.291983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.292090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.292100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.292199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.292209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.292370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.292381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.292490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.292500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 
00:26:44.878 [2024-07-15 17:08:51.292728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.292738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.292905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.292915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.293114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.293124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.293212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.293222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.293331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.293341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 
00:26:44.878 [2024-07-15 17:08:51.293448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.293459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.293555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.293565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.293732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.293742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.293842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.293853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.294063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.294073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 
00:26:44.878 [2024-07-15 17:08:51.294162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.294174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.294281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.878 [2024-07-15 17:08:51.294291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.878 qpair failed and we were unable to recover it. 00:26:44.878 [2024-07-15 17:08:51.294401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.294411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.294507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.294517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.294628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.294638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 
00:26:44.879 [2024-07-15 17:08:51.294730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.294740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.294916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.294925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.295028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.295038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.295160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.295170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.295275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.295285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 
00:26:44.879 [2024-07-15 17:08:51.295445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.295455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.295564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.295573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.295675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.295685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.295865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.295875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.295976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.295987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 
00:26:44.879 [2024-07-15 17:08:51.296216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.296230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.296397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.296407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.296498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.296514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.296599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.296612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.296700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.296711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 
00:26:44.879 [2024-07-15 17:08:51.296890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.296900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.297121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.297131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.297301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.297313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.297420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.297430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.297524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.297534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 
00:26:44.879 [2024-07-15 17:08:51.297623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.297633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.297803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.297814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.297988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.298008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.298111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.298125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.298293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.298308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 
00:26:44.879 [2024-07-15 17:08:51.298437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.298451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.298567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.298580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.298732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.298746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d44000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.298861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.298872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.299125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.299135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 
00:26:44.879 [2024-07-15 17:08:51.299239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.299250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.299410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.299420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.299685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.299695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.299792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.299803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.300001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.300011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 
00:26:44.879 [2024-07-15 17:08:51.300121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.300133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.300237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.300248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.300439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.300449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.879 qpair failed and we were unable to recover it. 00:26:44.879 [2024-07-15 17:08:51.300533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.879 [2024-07-15 17:08:51.300543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.300636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.300647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 
00:26:44.880 [2024-07-15 17:08:51.300829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.300840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.300953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.300964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.301067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.301077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.301181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.301192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.301352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.301363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 
00:26:44.880 [2024-07-15 17:08:51.301455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.301467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.301582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.301592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.301699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.301709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.301801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.301811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.301989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.301999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 
00:26:44.880 [2024-07-15 17:08:51.302091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.302102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.302287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.302298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.302462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.302472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.302572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.302582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.302752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.302763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 
00:26:44.880 [2024-07-15 17:08:51.302877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.302888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.303138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.303149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.303271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.303282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.303441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.303452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.303719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.303729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 
00:26:44.880 [2024-07-15 17:08:51.303890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.303900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.304004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.304015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.304117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.304128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.304381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.304391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.304567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.304578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 
00:26:44.880 [2024-07-15 17:08:51.304748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.304759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.305020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.305030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.305131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.305141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.305233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.305244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.305333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.305343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 
00:26:44.880 [2024-07-15 17:08:51.305450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.305460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.305685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.305695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.305922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.305932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.306056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.306067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.306221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.306235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 
00:26:44.880 [2024-07-15 17:08:51.306334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.306346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.306510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.306521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.306746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.306756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.306867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.306878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.307101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.307111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 
00:26:44.880 [2024-07-15 17:08:51.307220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.307235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.307399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.307410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.307504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.880 [2024-07-15 17:08:51.307514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.880 qpair failed and we were unable to recover it. 00:26:44.880 [2024-07-15 17:08:51.307695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.307705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.307813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.307823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 
00:26:44.881 [2024-07-15 17:08:51.307930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.307940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.308059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.308068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.308223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.308239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.308400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.308410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.308519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.308530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 
00:26:44.881 [2024-07-15 17:08:51.308642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.308652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.308788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.308797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.308904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.308914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.309021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.309032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.309215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.309228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 
00:26:44.881 [2024-07-15 17:08:51.309308] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.881 [2024-07-15 17:08:51.309332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.309343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.309502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.309512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.309676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.309686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.309911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.309922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.310041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.310051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 
00:26:44.881 [2024-07-15 17:08:51.310157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.310167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.310264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.310275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.310356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.310366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.310566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.310577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.310802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.310812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 
00:26:44.881 [2024-07-15 17:08:51.311036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.311046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.311143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.311152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.311360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.311370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.311474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.311484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.311637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.311647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 
00:26:44.881 [2024-07-15 17:08:51.311837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.311847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.311938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.311949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.312111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.312121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.312240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.312251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.312433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.312442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 
00:26:44.881 [2024-07-15 17:08:51.312631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.312641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.312865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.312876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.312991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.313002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.313196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.881 [2024-07-15 17:08:51.313207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.881 qpair failed and we were unable to recover it. 00:26:44.881 [2024-07-15 17:08:51.313316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.313327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 
00:26:44.882 [2024-07-15 17:08:51.313446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.313457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.313628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.313638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.313738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.313748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.313906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.313916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.314053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.314064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 
00:26:44.882 [2024-07-15 17:08:51.314242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.314253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.314366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.314376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.314474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.314484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.314654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.314669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.314836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.314847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 
00:26:44.882 [2024-07-15 17:08:51.315043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.315054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.315236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.315247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.315495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.315506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.315618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.315628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 00:26:44.882 [2024-07-15 17:08:51.315802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.882 [2024-07-15 17:08:51.315813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.882 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." entries for tqpair=0x7f4d4c000b90 (addr=10.0.0.2, port=4420) repeat through 17:08:51.334; subsequent duplicates omitted ...]
00:26:44.884 [2024-07-15 17:08:51.334366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.334377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 00:26:44.884 [2024-07-15 17:08:51.334543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.334554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 00:26:44.884 [2024-07-15 17:08:51.334662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.334673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 00:26:44.884 [2024-07-15 17:08:51.334773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.334784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 00:26:44.884 [2024-07-15 17:08:51.335016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.335027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 
00:26:44.884 [2024-07-15 17:08:51.335189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.335199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 00:26:44.884 [2024-07-15 17:08:51.335446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.335457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 00:26:44.884 [2024-07-15 17:08:51.335654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.335664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 00:26:44.884 [2024-07-15 17:08:51.335835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.335846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 00:26:44.884 [2024-07-15 17:08:51.335947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.335958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 
00:26:44.884 [2024-07-15 17:08:51.336036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.336046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 00:26:44.884 [2024-07-15 17:08:51.336220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.884 [2024-07-15 17:08:51.336235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.884 qpair failed and we were unable to recover it. 00:26:44.884 [2024-07-15 17:08:51.336357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.336368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.336537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.336547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.336684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.336695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 
00:26:44.885 [2024-07-15 17:08:51.336927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.336937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.337018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.337027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.337141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.337152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.337251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.337261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.337378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.337389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 
00:26:44.885 [2024-07-15 17:08:51.337546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.337557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.337720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.337731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.337895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.337905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.338131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.338142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.338315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.338325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 
00:26:44.885 [2024-07-15 17:08:51.338436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.338447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.338538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.338548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.338705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.338715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.338877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.338888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.338996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.339007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 
00:26:44.885 [2024-07-15 17:08:51.339105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.339117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.339343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.339353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.339507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.339517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.339680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.339691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.339818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.339828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 
00:26:44.885 [2024-07-15 17:08:51.339920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.339931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.340050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.340061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.340253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.340264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.340435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.340446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.340584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.340594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 
00:26:44.885 [2024-07-15 17:08:51.340768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.340778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.340933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.340944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.341038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.341049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.341203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.341214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.341329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.341340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 
00:26:44.885 [2024-07-15 17:08:51.341435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.341447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.341620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.341630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.341855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.341866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.342041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.342052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.342190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.342201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 
00:26:44.885 [2024-07-15 17:08:51.342297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.342308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.342441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.342451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.342621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.342632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.342737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.342747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.885 qpair failed and we were unable to recover it. 00:26:44.885 [2024-07-15 17:08:51.342904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.885 [2024-07-15 17:08:51.342915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 
00:26:44.886 [2024-07-15 17:08:51.343017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.343027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.343195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.343206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.343439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.343450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.343687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.343698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.343868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.343878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 
00:26:44.886 [2024-07-15 17:08:51.344049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.344059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.344167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.344176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.344354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.344365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.344470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.344480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.344578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.344588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 
00:26:44.886 [2024-07-15 17:08:51.344753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.344764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.344917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.344928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.345159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.345169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.345393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.345404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.345566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.345580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 
00:26:44.886 [2024-07-15 17:08:51.345686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.345703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.345864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.345880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.346002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.346015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.346129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.346141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 00:26:44.886 [2024-07-15 17:08:51.346316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.886 [2024-07-15 17:08:51.346328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.886 qpair failed and we were unable to recover it. 
00:26:44.886 [2024-07-15 17:08:51.346424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.886 [2024-07-15 17:08:51.346437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.886 qpair failed and we were unable to recover it.
[The same error triplet — connect() failed with errno = 111 (ECONNREFUSED) in posix.c:1038:posix_sock_create, followed by the nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock connection error for tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously from 17:08:51.346 through 17:08:51.365; the duplicate entries are elided here.]
00:26:44.888 [2024-07-15 17:08:51.365520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.888 [2024-07-15 17:08:51.365532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.888 qpair failed and we were unable to recover it. 00:26:44.888 [2024-07-15 17:08:51.365626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.888 [2024-07-15 17:08:51.365635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.888 qpair failed and we were unable to recover it. 00:26:44.888 [2024-07-15 17:08:51.365744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.888 [2024-07-15 17:08:51.365754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.888 qpair failed and we were unable to recover it. 00:26:44.888 [2024-07-15 17:08:51.365854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.888 [2024-07-15 17:08:51.365865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.888 qpair failed and we were unable to recover it. 00:26:44.888 [2024-07-15 17:08:51.365980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.888 [2024-07-15 17:08:51.365990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.888 qpair failed and we were unable to recover it. 
00:26:44.888 [2024-07-15 17:08:51.366189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.888 [2024-07-15 17:08:51.366199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.888 qpair failed and we were unable to recover it. 00:26:44.888 [2024-07-15 17:08:51.366363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.888 [2024-07-15 17:08:51.366374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.888 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.366537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.366547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.366724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.366734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.366974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.366983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 
00:26:44.889 [2024-07-15 17:08:51.367172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.367181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.367338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.367349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.367518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.367529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.367706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.367716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.367818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.367828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 
00:26:44.889 [2024-07-15 17:08:51.368001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.368011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.368116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.368126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.368216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.368229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.368408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.368419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.368582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.368593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 
00:26:44.889 [2024-07-15 17:08:51.368770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.368780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.368887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.368896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.369119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.369129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.369232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.369242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.369468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.369478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 
00:26:44.889 [2024-07-15 17:08:51.369583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.369593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.369693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.369703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.369795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.369805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.369910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.369920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.369982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.369991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 
00:26:44.889 [2024-07-15 17:08:51.370084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.370093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.370361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.370371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.370551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.370561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.370670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.370681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.370775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.370785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 
00:26:44.889 [2024-07-15 17:08:51.370891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.370901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.371061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.371071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.371184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.371194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.371284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.371294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.371391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.371401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 
00:26:44.889 [2024-07-15 17:08:51.371491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.371503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.371669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.371679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.371837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.371847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.371947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.371956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.372047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.372056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 
00:26:44.889 [2024-07-15 17:08:51.372218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.372232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.372395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.372405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.372564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.372574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.372676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.372686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.372804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.372814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 
00:26:44.889 [2024-07-15 17:08:51.372932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.372942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.373103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.373113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.373172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.373188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.373294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.373304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.373466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.373477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 
00:26:44.889 [2024-07-15 17:08:51.373640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.373650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.373821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.373830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.889 qpair failed and we were unable to recover it. 00:26:44.889 [2024-07-15 17:08:51.373928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.889 [2024-07-15 17:08:51.373937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.374042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.374052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.374143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.374153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 
00:26:44.890 [2024-07-15 17:08:51.374280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.374290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.374474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.374483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.374585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.374594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.374686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.374695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.374761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.374770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 
00:26:44.890 [2024-07-15 17:08:51.374962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.374972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.375146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.375155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.375347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.375358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.375458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.375468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.375631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.375641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 
00:26:44.890 [2024-07-15 17:08:51.375727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.375737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.375896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.375906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.376170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.376180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.376273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.376283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 00:26:44.890 [2024-07-15 17:08:51.376457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.890 [2024-07-15 17:08:51.376466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.890 qpair failed and we were unable to recover it. 
00:26:44.890 [2024-07-15 17:08:51.376636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.376646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.376806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.376815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.376931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.376941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.377100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.377110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.377208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.377219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.377482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.377496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.377654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.377664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.377838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.377848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.377923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.377932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.378025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.378035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.378138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.378148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.378324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.378334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.378509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.378519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.378754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.378764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.378924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.378933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.379044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.379054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.379165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.379175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.379272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.379282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.379387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.379396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.379623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.379632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.379763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.379773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.379865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.379876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.379971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.379981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.380232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.380245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.380344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.380354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.380599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.380611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.380789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.380800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.380969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.380979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.381230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.381240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.381429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.381439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.381660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.381670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.381894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.381904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.382113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.890 [2024-07-15 17:08:51.382147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.890 qpair failed and we were unable to recover it.
00:26:44.890 [2024-07-15 17:08:51.382285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.382302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.382534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.382548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.382735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.382750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.382888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.382903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.383026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.383041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.383240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.383256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.383388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.383402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.383572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.383586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.383707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.383721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.383821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.383836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.384023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.384037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.384209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.384222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.384360] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:44.891 [2024-07-15 17:08:51.384389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:44.891 [2024-07-15 17:08:51.384400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:44.891 [2024-07-15 17:08:51.384406] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:44.891 [2024-07-15 17:08:51.384410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:44.891 [2024-07-15 17:08:51.384434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.384449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.384475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:26:44.891 [2024-07-15 17:08:51.384655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.384666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:26:44.891 [2024-07-15 17:08:51.384670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:44.891 [2024-07-15 17:08:51.384560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.384668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:26:44.891 [2024-07-15 17:08:51.384864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.384890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.385108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.385121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.385302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.385313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.385510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.385520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.385766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.385777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.386077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.386086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.386281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.386291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.386516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.386527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.386701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.386711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.386956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.386966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.387157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.387167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.387369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.387380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.387633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.387643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.387882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.387892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.388142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.388153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.388403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.388414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.388609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.388621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.388868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.388878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.389099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.389109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.389267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.389277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.389474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.389483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.389676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.389686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.389887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.389899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.390128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.390138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.390315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.390327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.390577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.390587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.390861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.390872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.391081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.391091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.391343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.391354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.391629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.391640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.391908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.391919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.392110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.392120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.392397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.392408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.891 qpair failed and we were unable to recover it.
00:26:44.891 [2024-07-15 17:08:51.392580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.891 [2024-07-15 17:08:51.392590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.392764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.392774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.393046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.393057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.393300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.393311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.393586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.393597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.393805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.393816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.394076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.394087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.394311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.394322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.394440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.394451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.394613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.394623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.394860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.394871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.395053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.395064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.395351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.395362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.395575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.395586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.395731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.395742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.395930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.395941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.396216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.396232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.396387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.396398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.396598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.396609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.396776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.396786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.397042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.397059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.397229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.397240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.397453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.397463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.397643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.397654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.397900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.397911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.398160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.398170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.398330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.398342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.398465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.398476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.398728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.398739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.398922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.398935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.399110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.399121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.399376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.399387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.399633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.892 [2024-07-15 17:08:51.399645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.892 qpair failed and we were unable to recover it.
00:26:44.892 [2024-07-15 17:08:51.399757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.399768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 00:26:44.892 [2024-07-15 17:08:51.399924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.399936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 00:26:44.892 [2024-07-15 17:08:51.400184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.400195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 00:26:44.892 [2024-07-15 17:08:51.400423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.400435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 00:26:44.892 [2024-07-15 17:08:51.400638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.400650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 
00:26:44.892 [2024-07-15 17:08:51.400897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.400908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 00:26:44.892 [2024-07-15 17:08:51.401152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.401165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 00:26:44.892 [2024-07-15 17:08:51.401418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.401430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 00:26:44.892 [2024-07-15 17:08:51.401676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.401688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 00:26:44.892 [2024-07-15 17:08:51.401888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.401898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 
00:26:44.892 [2024-07-15 17:08:51.402013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.402024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 00:26:44.892 [2024-07-15 17:08:51.402194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.402205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 00:26:44.892 [2024-07-15 17:08:51.402458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.892 [2024-07-15 17:08:51.402470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.892 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.402650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.402662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.402781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.402791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 
00:26:44.893 [2024-07-15 17:08:51.402947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.402958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.403124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.403134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.403403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.403415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.403598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.403609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.403798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.403809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 
00:26:44.893 [2024-07-15 17:08:51.403970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.403981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.404088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.404099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.404324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.404336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.404611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.404624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.404873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.404885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 
00:26:44.893 [2024-07-15 17:08:51.405160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.405171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.405419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.405430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.405654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.405665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.405920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.405931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.406184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.406194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 
00:26:44.893 [2024-07-15 17:08:51.406440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.406451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.406611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.406621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.406794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.406804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.406976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.406986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.407271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.407282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 
00:26:44.893 [2024-07-15 17:08:51.407389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.407400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.407571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.407585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.407759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.407771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.407930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.407940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.408166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.408176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 
00:26:44.893 [2024-07-15 17:08:51.408336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.408347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.408590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.408600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.408827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.408838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.409087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.409098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.409330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.409341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 
00:26:44.893 [2024-07-15 17:08:51.409613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.409624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.409814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.409825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.410028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.410038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.410207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.410217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.410393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.410403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 
00:26:44.893 [2024-07-15 17:08:51.410726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.410737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.410909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.410919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.411170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.411181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.411308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.411320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.411565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.411575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 
00:26:44.893 [2024-07-15 17:08:51.411751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.411762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.411943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.411954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.412071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.412082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.412231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.412242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.412510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.412521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 
00:26:44.893 [2024-07-15 17:08:51.412690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.412701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.412891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.412902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.893 [2024-07-15 17:08:51.413124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.893 [2024-07-15 17:08:51.413134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.893 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.413265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.413275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.413441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.413451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 
00:26:44.894 [2024-07-15 17:08:51.413647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.413657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.413904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.413915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.414095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.414105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.414308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.414319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.414570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.414580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 
00:26:44.894 [2024-07-15 17:08:51.414758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.414768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.414956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.414967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.415217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.415232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.415458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.415469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.415693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.415704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 
00:26:44.894 [2024-07-15 17:08:51.415812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.415823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.416071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.416084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.416344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.416355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.416564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.416575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 00:26:44.894 [2024-07-15 17:08:51.416815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.894 [2024-07-15 17:08:51.416826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.894 qpair failed and we were unable to recover it. 
00:26:44.896 [2024-07-15 17:08:51.440170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.440181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.440437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.440448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.440644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.440654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.440875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.440885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.441108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.441118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 
00:26:44.896 [2024-07-15 17:08:51.441301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.441311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.441558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.441570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.441763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.441774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.441961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.441971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.442177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.442188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 
00:26:44.896 [2024-07-15 17:08:51.442484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.442494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.442618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.442629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.442875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.442885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.443050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.443060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.896 [2024-07-15 17:08:51.443282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.443292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 
00:26:44.896 [2024-07-15 17:08:51.443518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-07-15 17:08:51.443528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.896 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.443693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.443703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.443938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.443951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.444203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.444214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.444466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.444476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-07-15 17:08:51.444687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.444698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.444891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.444902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.445152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.445162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.445258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.445268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.445445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.445455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-07-15 17:08:51.445579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.445589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.445689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.445699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.445858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.445868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.446069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.446079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.446308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.446319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-07-15 17:08:51.446576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.446586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.446713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.446724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.446990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.447000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.447192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.447202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.447391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.447402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-07-15 17:08:51.447579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.447589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.447812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.447822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.448035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.448045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.448294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.448305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.448419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.448430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-07-15 17:08:51.448614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.448625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.448737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.448747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.448969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.448979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.449256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.449267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.449517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.449528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-07-15 17:08:51.449701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.449711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.449943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.449955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.450069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.450078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.450346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.450357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.450601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.450611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-07-15 17:08:51.450796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.450807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.451035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.451045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.451305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.451315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.451538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.451550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.451728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.451738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-07-15 17:08:51.451896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.451906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.452081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.452091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.452259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.452269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.452440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.452450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.452555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.452565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-07-15 17:08:51.452671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.452681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.452966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.452976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.453163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.453173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.453352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.453362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-07-15 17:08:51.453570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.453580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-07-15 17:08:51.453775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-07-15 17:08:51.453784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.898 [2024-07-15 17:08:51.453954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-07-15 17:08:51.453964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-07-15 17:08:51.454234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-07-15 17:08:51.454245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-07-15 17:08:51.454355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-07-15 17:08:51.454365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-07-15 17:08:51.454611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-07-15 17:08:51.454621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 
00:26:44.898 [2024-07-15 17:08:51.454735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-07-15 17:08:51.454745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-07-15 17:08:51.454902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-07-15 17:08:51.454912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-07-15 17:08:51.455077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-07-15 17:08:51.455088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-07-15 17:08:51.455262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-07-15 17:08:51.455273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-07-15 17:08:51.455369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-07-15 17:08:51.455382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 
00:26:44.898 [2024-07-15 17:08:51.455497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.898 [2024-07-15 17:08:51.455508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.898 qpair failed and we were unable to recover it.
[... same connect()/qpair-failure sequence repeated with advancing timestamps between 17:08:51.455 and 17:08:51.478 ...]
00:26:44.900 [2024-07-15 17:08:51.478210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.900 [2024-07-15 17:08:51.478219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.900 qpair failed and we were unable to recover it.
00:26:44.900 [2024-07-15 17:08:51.478447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.478457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.478722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.478732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.478840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.478850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.479020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.479030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.479202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.479212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 
00:26:44.900 [2024-07-15 17:08:51.479391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.479401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.479600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.479610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.479857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.479867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.480036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.480045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.480290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.480301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 
00:26:44.900 [2024-07-15 17:08:51.480405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.480418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.480663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.480673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.480841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.480851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.481117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.481127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.481375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.481386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 
00:26:44.900 [2024-07-15 17:08:51.481552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.481563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.481671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.481683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.481918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.481928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.482050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.900 [2024-07-15 17:08:51.482060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.900 qpair failed and we were unable to recover it. 00:26:44.900 [2024-07-15 17:08:51.482215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.482229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 
00:26:44.901 [2024-07-15 17:08:51.482456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.482466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.482714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.482724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.482916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.482926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.483020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.483030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.483252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.483262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 
00:26:44.901 [2024-07-15 17:08:51.483458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.483468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.483607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.483617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.483826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.483836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.484064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.484075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.484325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.484335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 
00:26:44.901 [2024-07-15 17:08:51.484590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.484601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.484839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.484850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.484969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.484980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.485177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.485187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.485368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.485379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 
00:26:44.901 [2024-07-15 17:08:51.485627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.485637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.485857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.485867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.486038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.486047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.486276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.486287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.486454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.486464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 
00:26:44.901 [2024-07-15 17:08:51.486670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.486680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.486907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.486917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.487152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.487162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.487344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.487377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.487565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.487579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 
00:26:44.901 [2024-07-15 17:08:51.487743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.487757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.487875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.487889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.488094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.488108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.488236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.488250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.488425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.488438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 
00:26:44.901 [2024-07-15 17:08:51.488568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.488582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.488783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.488797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.488973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.488987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.489237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.489251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.489444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.489457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 
00:26:44.901 [2024-07-15 17:08:51.489567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.489581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.489797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.489810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.490050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.490063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.490151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.490163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.490365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.490377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 
00:26:44.901 [2024-07-15 17:08:51.490467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.490476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.490639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.490649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.490821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.490831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.490992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.491002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.491120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.491130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 
00:26:44.901 [2024-07-15 17:08:51.491321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.491331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.491420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.491429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.491546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.491556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.491801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.491811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 00:26:44.901 [2024-07-15 17:08:51.491923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.901 [2024-07-15 17:08:51.491933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.901 qpair failed and we were unable to recover it. 
00:26:44.901 [2024-07-15 17:08:51.492090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.902 [2024-07-15 17:08:51.492100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.902 qpair failed and we were unable to recover it. 00:26:44.902 [2024-07-15 17:08:51.492257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.902 [2024-07-15 17:08:51.492268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.902 qpair failed and we were unable to recover it. 00:26:44.902 [2024-07-15 17:08:51.492371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.902 [2024-07-15 17:08:51.492380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.902 qpair failed and we were unable to recover it. 00:26:44.902 [2024-07-15 17:08:51.492554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.902 [2024-07-15 17:08:51.492564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.902 qpair failed and we were unable to recover it. 00:26:44.902 [2024-07-15 17:08:51.492670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.902 [2024-07-15 17:08:51.492680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.902 qpair failed and we were unable to recover it. 
00:26:44.902 [2024-07-15 17:08:51.492750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.492760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.492932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.492942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.493106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.493116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.493230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.493241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.493389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.493399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.493534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.493544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.493645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.493655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.493825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.493835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.493925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.493936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.494111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.494121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.494229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.494239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.494350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.494359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.494519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.494529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.494648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.494658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.494757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.494767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.494988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.494998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.495245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.495255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.495405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.495415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.495536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.495546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.495646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.495656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.495758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.495768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.495926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.495936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.496164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.496174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.496263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.496272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.496436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.496446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.496561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.496571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.496677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.496687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.496780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.496793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.497041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.497051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.497222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.497235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.497426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.497436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.497582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.497592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.497704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.497713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.497945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.497955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.498236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.498247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.498343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.498353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.498491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.498501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.498597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.498606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.498776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.498785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.902 [2024-07-15 17:08:51.498903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.902 [2024-07-15 17:08:51.498913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.902 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.499161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.499171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.499277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.499286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.499513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.499523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.499713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.499723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.499902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.499912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.500158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.500180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.500277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.500287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.500529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.500539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.500733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.500745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.500945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.500954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.501069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.501079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.501194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.501204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.501362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.501372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.501597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.501607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.501713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.501723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.501955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.501965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.502125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.502135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.502247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.502257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.502428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.502438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.502610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.502620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.502822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.502832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.502949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.502959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.503144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.503154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.503266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.503277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.503443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.503452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.503673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.503683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.503880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.503890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.504061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.504071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.504241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.504252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.504446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.504456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.504681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.504691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.504931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.504940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.505170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.505180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.505283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.505293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.505463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.505474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.505720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.505730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.505961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.505971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.506147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.506157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.506273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.506284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.506385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.506395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.506616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.506626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.506851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.506860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.506961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.506970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.507165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.507175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.507282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.507295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.507414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.507424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.507544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.507554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.507736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.507746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.507917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.507929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.508047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.508057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.903 qpair failed and we were unable to recover it.
00:26:44.903 [2024-07-15 17:08:51.508235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.903 [2024-07-15 17:08:51.508245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.508419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.508429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.508582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.508592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.508746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.508756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.508917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.508928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.509149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.509159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.509401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.509411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.509635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.509644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.509738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.509747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.509970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.509980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.510139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.510149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.510358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.510368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.510598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.510609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.510721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.510731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.510919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.510928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.511118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.511128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.511287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.511297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.511457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.511467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.511535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.511544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.511698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.511708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.511827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.511837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.511998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.512007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.512110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.512119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.512235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.512246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.512497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.904 [2024-07-15 17:08:51.512507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:44.904 qpair failed and we were unable to recover it.
00:26:44.904 [2024-07-15 17:08:51.512630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.512640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.512744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.512754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.512869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.512879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.513001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.513011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.513116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.513126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 
00:26:44.904 [2024-07-15 17:08:51.513282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.513293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.513389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.513400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.513526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.513536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.513747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.513757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.514026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.514036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 
00:26:44.904 [2024-07-15 17:08:51.514259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.514269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.514552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.514562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.514667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.514676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.514788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.514800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.514992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.515002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 
00:26:44.904 [2024-07-15 17:08:51.515117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.515127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.515297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.515307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.515421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.515432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.515529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.515539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-07-15 17:08:51.515642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-07-15 17:08:51.515652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 
00:26:45.188 [2024-07-15 17:08:51.515754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.515765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.515936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.515946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.516170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.516182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.516340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.516351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.516503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.516513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 
00:26:45.188 [2024-07-15 17:08:51.516734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.516745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.516868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.516878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.517038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.517048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.517204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.517214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.517442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.517453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 
00:26:45.188 [2024-07-15 17:08:51.517560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.517570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.517679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.517688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.517765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.517774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.517956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.517966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.518082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.518092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 
00:26:45.188 [2024-07-15 17:08:51.518342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.518353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.518512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.518522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.518614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.518622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.518731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.518741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.518916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.518926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 
00:26:45.188 [2024-07-15 17:08:51.519105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.519115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.519216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.519229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.519383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.519392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.519498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.519508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.519614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.519624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 
00:26:45.188 [2024-07-15 17:08:51.519805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.519815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.519920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.519929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.520088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.520099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.520201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.520211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.520307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.520316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 
00:26:45.188 [2024-07-15 17:08:51.520472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.520482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.520646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.520655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.520814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.520824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.520922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.520934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.521093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.521103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 
00:26:45.188 [2024-07-15 17:08:51.521200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.521210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.521318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.521329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.521493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.521503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.521615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.521625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.521801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.521811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 
00:26:45.188 [2024-07-15 17:08:51.521927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.521937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.522034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.522043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.522269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.522280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.522526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.522536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.522644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.522653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 
00:26:45.188 [2024-07-15 17:08:51.522814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.522823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.188 qpair failed and we were unable to recover it. 00:26:45.188 [2024-07-15 17:08:51.522915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.188 [2024-07-15 17:08:51.522924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.189 qpair failed and we were unable to recover it. 00:26:45.189 [2024-07-15 17:08:51.523131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.189 [2024-07-15 17:08:51.523141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.189 qpair failed and we were unable to recover it. 00:26:45.189 [2024-07-15 17:08:51.523340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.189 [2024-07-15 17:08:51.523350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.189 qpair failed and we were unable to recover it. 00:26:45.189 [2024-07-15 17:08:51.523524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.189 [2024-07-15 17:08:51.523533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.189 qpair failed and we were unable to recover it. 
00:26:45.189 [2024-07-15 17:08:51.523712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.189 [2024-07-15 17:08:51.523722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.189 qpair failed and we were unable to recover it. 00:26:45.189 [2024-07-15 17:08:51.523881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.189 [2024-07-15 17:08:51.523891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.189 qpair failed and we were unable to recover it. 00:26:45.189 [2024-07-15 17:08:51.523992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.189 [2024-07-15 17:08:51.524002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.189 qpair failed and we were unable to recover it. 00:26:45.189 [2024-07-15 17:08:51.524280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.189 [2024-07-15 17:08:51.524291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.189 qpair failed and we were unable to recover it. 00:26:45.189 [2024-07-15 17:08:51.524458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.189 [2024-07-15 17:08:51.524469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.189 qpair failed and we were unable to recover it. 
00:26:45.189 [2024-07-15 17:08:51.524651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.524660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.524755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.524765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.524882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.524893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.525115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.525124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.525229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.525239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.525398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.525408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.525512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.525521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.525612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.525622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.525780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.525790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.525885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.525895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.526088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.526098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.526332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.526342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.526460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.526470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.526571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.526580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.526738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.526747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.526851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.526861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.527083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.527093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.527266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.527276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.527442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.527453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.527609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.527619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.527713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.527723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.527830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.527839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.527941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.527951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.528057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.528067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.528227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.528237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.528346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.528356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.528491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.528501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.528663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.528674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.528827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.528837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.529013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.529023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.529185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.529195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.529380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.529390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.529541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.529551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.529726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.529736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.529856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.529865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.530065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.530075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.530245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.530256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.530428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.530438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.530688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.530698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.530889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.530899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.531082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.531092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.531338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.531348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.531466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.531475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.531660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.531670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.531784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.531793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.189 qpair failed and we were unable to recover it.
00:26:45.189 [2024-07-15 17:08:51.531963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.189 [2024-07-15 17:08:51.531973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.532077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.532087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.532222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.532235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.532393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.532403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.532571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.532581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.532734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.532745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.532899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.532909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.533067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.533076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.533257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.533267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.533399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.533409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.533574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.533583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.533676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.533686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.533909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.533919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.534141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.534152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.534321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.534332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.534421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.534431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.534598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.534608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.534772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.534782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.534950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.534960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.535063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.535073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.535241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.535251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.535412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.535422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.535579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.535589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.535768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.535777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.535872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.535882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.535947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.535956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.536114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.536124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.536288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.536298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.536443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.536453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.536621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.536631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.536808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.536818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.536933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.536943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.537110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.537120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.537197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.537206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.537300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.537309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.537532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.537542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.537657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.537666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.537828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.537838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.537973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.537983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.538151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.538161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.538289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.538315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.538554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.538568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.538805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.538819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.538940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.538954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.539058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.539072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.539249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.539263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.539426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.539440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.539671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.539684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.539801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.539815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.539923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.539937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.540192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.540206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.540469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.540483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.540625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.540639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.540760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.540773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.190 [2024-07-15 17:08:51.540960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.190 [2024-07-15 17:08:51.540974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.190 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.541096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.541110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.541215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.541234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.541356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.541370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.541563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.541577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.541691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.541704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.541956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.541970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.542048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.542060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.542340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.542354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.542471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.542485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.542650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.542663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.542838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.542852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.543032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.543046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.543152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.543169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.543334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.543348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.543452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.191 [2024-07-15 17:08:51.543465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.191 qpair failed and we were unable to recover it.
00:26:45.191 [2024-07-15 17:08:51.543559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.543573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.543680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.543693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.543929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.543943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.544113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.544126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.544384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.544398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-07-15 17:08:51.544568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.544582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.544693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.544706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.544788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.544801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.544917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.544931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.545045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.545058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-07-15 17:08:51.545242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.545256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.545510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.545524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.545708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.545722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.545913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.545926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.546148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.546162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-07-15 17:08:51.546290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.546304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.546558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.546571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.546751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.546764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.546992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.547005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.547074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.547087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-07-15 17:08:51.547255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.547269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.547443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.547456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.547687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.547700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.547878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.547892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.548014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.548030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-07-15 17:08:51.548143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.548156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.548287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.548302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.548403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.548416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.548577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.548590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.548838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.548851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-07-15 17:08:51.549052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.549065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.549192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.549205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-07-15 17:08:51.549371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-07-15 17:08:51.549384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.549626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.549640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.549755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.549769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-07-15 17:08:51.549961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.549974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.550150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.550163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.550363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.550377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.550552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.550564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.550786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.550795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-07-15 17:08:51.550896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.550906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.551016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.551026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.551182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.551193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.551296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.551307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.551505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.551515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-07-15 17:08:51.551640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.551650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.551826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.551836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.551927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.551937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.552102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.552111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.552360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.552370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-07-15 17:08:51.552494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.552504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.552624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.552636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.552866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.552876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.552979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.552989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.553156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.553166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-07-15 17:08:51.553266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.553276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.553382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.553392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.553498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.553508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.553628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.553638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.553718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.553727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-07-15 17:08:51.553895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.553905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.554064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.554074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.554186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.554196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.554355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.554365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.554525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.554534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-07-15 17:08:51.554726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.554735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.554910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.554919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.555089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.555099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.555214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.555226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.555318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.555327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-07-15 17:08:51.555428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.555437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.555682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.555692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.555887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.555897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.556053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.556063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.556221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.556237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-07-15 17:08:51.556325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.556334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.556405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.556414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.556575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.556584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.556691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.556701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-07-15 17:08:51.556926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-07-15 17:08:51.556936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-07-15 17:08:51.574690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-07-15 17:08:51.574700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.574799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.574808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.574918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.574928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.575105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.575114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.575189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.575199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-07-15 17:08:51.575392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.575402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.575562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.575571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.575796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.575806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.575894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.575906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.576077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.576086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-07-15 17:08:51.576253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.576263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.576376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.576385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.576523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.576533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.576690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.576700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.576816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.576826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-07-15 17:08:51.577003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.577012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.577129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.577139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.577372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.577382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.577609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.577619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.577791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.577800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-07-15 17:08:51.577908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.577917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.578082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.578091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.578180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.578190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.578365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.578375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.578480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.578490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-07-15 17:08:51.578764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.578773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.578874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.578884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.579017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.579027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.579167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.579177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.579334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.579344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-07-15 17:08:51.579436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.579445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.579638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.579648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.579742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.579752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.579928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.579937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.580038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.580048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-07-15 17:08:51.580270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.580281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.580372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.580381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.580452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.580461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.580613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.580623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.580711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.580721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-07-15 17:08:51.580877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.580887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.580993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.581003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.581169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.581179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.581340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.581350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.581519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.581529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-07-15 17:08:51.581698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.581707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.581815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.581825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.581926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.581936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.582050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.582063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.582155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.582165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-07-15 17:08:51.582275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.582285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.582383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.582393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.582557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.582567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.582787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.582797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.582911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.582920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-07-15 17:08:51.583020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-07-15 17:08:51.583030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-07-15 17:08:51.583186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.583196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.583311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.583322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.583424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.583434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.583622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.583631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-07-15 17:08:51.583855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.583864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.584091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.584101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.584270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.584281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.584435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.584445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.584698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.584708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-07-15 17:08:51.584895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.584905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.585069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.585078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.585167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.585177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.585297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.585307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.585530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.585540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-07-15 17:08:51.585644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.585653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.585825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.585835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.586062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.586072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.586243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.586254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-07-15 17:08:51.586499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-07-15 17:08:51.586509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-07-15 17:08:51.586696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.586706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.586881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.586891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.587017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.587027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.587119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.587129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.587285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.587295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.587391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.587401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.587556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.587565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.587731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.587741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.587935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.587945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.588047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.588057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.588179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.588189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.588435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.588445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.588565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.588575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.588739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.588751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.588925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.588934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.589104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.589114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.589270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.589280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.589402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.589412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.589574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.589584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.589691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.589701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.589951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.589961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.590126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.590136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.590294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.590304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.590458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.590468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.590585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.590595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.590855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.590865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.591019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.591028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.591187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.591197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.591354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.591364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.591532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.591542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.591792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.591801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.591949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.591958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.592192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.592202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.592382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.592392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.592497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.592507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.592725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.196 [2024-07-15 17:08:51.592735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.196 qpair failed and we were unable to recover it.
00:26:45.196 [2024-07-15 17:08:51.592975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.592984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.593079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.593088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.593355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.593366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.593487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.593497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.593609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.593619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.593707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.593716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.593878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.593888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.594008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.594017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.594191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.594201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.594450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.594460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.594553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.594563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.594744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.594755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.594857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.594867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.595025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.595035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.595149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.595158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.595249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.595258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.595409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.595418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.595574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.595585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.595694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.595704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.595808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.595818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.595884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.595893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.596048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.596059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.596302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.596312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.596460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.596470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.596625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.596635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.596826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.596836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.597017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.597027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.597125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.597135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.597232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.597241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.597395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.597405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.597583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.597594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.597788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.597798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.597917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.597927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.598089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.598099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.598269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.598279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.598471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.598481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.598647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.598657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.598848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.598858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.599028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.599037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.599143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.599153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.599283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.599293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.599496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.599506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.599723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.599732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.599903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.599913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.600071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.600081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.600186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.600196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.600305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.600316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.600414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-07-15 17:08:51.600423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-07-15 17:08:51.600586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.600596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.600828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.600838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.601056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.601066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.601172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.601182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.601345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.601355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.601595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.601605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.601775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.601785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.601999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.602009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.602177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.602187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.602361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.602373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.602543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.602553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.602645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.602654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.602813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.602823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.603010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.603019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.603132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.603142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.603257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.603268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.603470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.603480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.603598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.603608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.603706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.603717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.603965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.603975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.604223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.604242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.604423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.604433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.604520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.604529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.604759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.604769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.604926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.604935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.605195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.605204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.605314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.605324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.605547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.605556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.605669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.605679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.605791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.605800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.605908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-07-15 17:08:51.605918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-07-15 17:08:51.606077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.606087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.606177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.606186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.606411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.606421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.606578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.606588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.606692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.606701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 
00:26:45.198 [2024-07-15 17:08:51.606794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.606804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.606997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.607007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.607199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.607209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.607366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.607376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.607472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.607484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 
00:26:45.198 [2024-07-15 17:08:51.607668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.607678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.607776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.607787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.607890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.607900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.608053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.608062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.608285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.608295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 
00:26:45.198 [2024-07-15 17:08:51.608490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.608501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.608749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.608759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.608924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.608933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.609024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.609035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.609153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.609163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 
00:26:45.198 [2024-07-15 17:08:51.609253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.609262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.609437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.609447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.609552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.609561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.609732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.198 [2024-07-15 17:08:51.609742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.198 qpair failed and we were unable to recover it. 00:26:45.198 [2024-07-15 17:08:51.609811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.609820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-07-15 17:08:51.609917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.609927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.610082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.610092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.610289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.610299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.610459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.610469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.610575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.610585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-07-15 17:08:51.610769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.610778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.610878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.610888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.611130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.611140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.611311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.611321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.611479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.611489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-07-15 17:08:51.611737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.611747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.611929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.611939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.612041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.612051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.612222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.612243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.612485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.612495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-07-15 17:08:51.612745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.612755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.612923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.612933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.613154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.613164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.613415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.613425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.613515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.613524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-07-15 17:08:51.613638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.613648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.613873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.613883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.614077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.614086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.614243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.614253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.614357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.614367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-07-15 17:08:51.614473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.614483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.614657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.614667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.614902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.614912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.615086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.615096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.615185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.615194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-07-15 17:08:51.615365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.615375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.615623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.615633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.615745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.615755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.615880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.615893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.616089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.616099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-07-15 17:08:51.616265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.616276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.616499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.616508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.616759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.616768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.616955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.616965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.617073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.617083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-07-15 17:08:51.617273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.617283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.617402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.617412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.617569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.617579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.617759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.617769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.617875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.617885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-07-15 17:08:51.618081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.618090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.618287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.618297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.618469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.618479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.618585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.618595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-07-15 17:08:51.618763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-07-15 17:08:51.618772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-07-15 17:08:51.618837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-07-15 17:08:51.618846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.202 [2024-07-15 17:08:51.619001 through 17:08:51.638684] (the identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420)
00:26:45.202 [2024-07-15 17:08:51.638886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.638899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.639075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.639086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.639195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.639205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.639388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.639404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.639515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.639530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-07-15 17:08:51.639631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.639644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.639802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.639812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.639906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.639916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.640043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.640058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.640245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.640262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-07-15 17:08:51.640411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.640424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.640574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.640584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.640757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.640767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.641024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.641041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.641273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.641285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-07-15 17:08:51.641460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.641471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.641647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.641663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.641894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.641907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.642149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.642159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.642239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.642253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-07-15 17:08:51.642430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.642446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.642619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.642632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.642801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.642812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.642919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.642930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.643180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.643196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-07-15 17:08:51.643334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.643346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.643581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.643591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.643764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.643780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.643889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.643907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.644097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.644110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-07-15 17:08:51.644193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.644203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.644410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.644420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.644671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.644688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.644810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.644823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.645004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.645014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-07-15 17:08:51.645123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.645133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.645312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.645328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.645592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.645605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.645775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.645785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.645892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.645904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-07-15 17:08:51.646011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.646026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.646194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.646210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.646357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.646370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.646540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.646550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.646830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.646846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-07-15 17:08:51.647031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.647044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-07-15 17:08:51.647166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-07-15 17:08:51.647176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.647348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.647359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.647467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.647482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.647649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.647664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-07-15 17:08:51.647838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.647850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.647973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.647983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.648091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.648101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.648284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.648300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.648480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.648493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-07-15 17:08:51.648657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.648668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.648760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.648769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.648854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.648865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.648996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.649011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.649257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.649271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-07-15 17:08:51.649496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.649506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.649673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.649689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.649883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.649899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.650019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.650030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.650212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.650223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-07-15 17:08:51.650330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.650342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.650596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.650612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.650737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.650749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.650922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.650937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.651035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.651045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-07-15 17:08:51.651172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.651187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.651356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.651372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.651509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.651522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.651683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.651693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-07-15 17:08:51.651861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.651877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-07-15 17:08:51.652057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-07-15 17:08:51.652072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it.
00:26:45.205 [2024-07-15 17:08:51.672898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.672913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.673096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.673109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.673236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.673246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.673470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.673485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.673691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.673707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 
00:26:45.205 [2024-07-15 17:08:51.673829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.673839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.674042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.674052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.674221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.674240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.674365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.674380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.674558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.674570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 
00:26:45.205 [2024-07-15 17:08:51.674681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.674691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.674864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.674874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.675035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.675046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.675156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.675166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.205 [2024-07-15 17:08:51.675270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.675283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 
00:26:45.205 [2024-07-15 17:08:51.675458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.205 [2024-07-15 17:08:51.675468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.205 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.675631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.675641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.675748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.675758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.675861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.675871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.676100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.676110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 
00:26:45.206 [2024-07-15 17:08:51.676361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.676371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.676564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.676574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.676755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.676765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.676926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.676936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.677104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.677114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 
00:26:45.206 [2024-07-15 17:08:51.677286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.677298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.677393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.677402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.677502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.677512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.677678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.677687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.677913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.677923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 
00:26:45.206 [2024-07-15 17:08:51.678145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.678155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.678246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.678256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.678432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.678443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.678697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.678707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.678817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.678827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 
00:26:45.206 [2024-07-15 17:08:51.678938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.678948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.679119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.679129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.679243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.679254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.679480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.679490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.679613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.679623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 
00:26:45.206 [2024-07-15 17:08:51.679798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.679808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.679900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.679909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.680006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.680016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.680172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.680182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.680336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.680346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 
00:26:45.206 [2024-07-15 17:08:51.680545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.680555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.680744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.680753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.680878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.680888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.681062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.681072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.681245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.681255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 
00:26:45.206 [2024-07-15 17:08:51.681509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.681520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.681714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.681725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.681836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.681846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.681943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.681953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.682061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.682071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 
00:26:45.206 [2024-07-15 17:08:51.682164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.682173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.682419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.682429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.682650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.682661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.682924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.682934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.683044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.683053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 
00:26:45.206 [2024-07-15 17:08:51.683277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.683287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.683538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.683547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.683703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.683713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.683819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.683829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.683989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.683999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 
00:26:45.206 [2024-07-15 17:08:51.684166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.684177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.684400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.684410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.684647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.684657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.684834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.684844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.206 [2024-07-15 17:08:51.685002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.685012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 
00:26:45.206 [2024-07-15 17:08:51.685109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.206 [2024-07-15 17:08:51.685119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.206 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.685343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.685353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.685532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.685542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.685718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.685728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.685898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.685908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 
00:26:45.207 [2024-07-15 17:08:51.686013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.686023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.686251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.686261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.686430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.686440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.686544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.686554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.686715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.686725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 
00:26:45.207 [2024-07-15 17:08:51.686890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.686900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.687071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.687081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.687244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.687256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.687430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.687441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.687603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.687613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 
00:26:45.207 [2024-07-15 17:08:51.687730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.687740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.687935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.687946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.688046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.688056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.688249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.688260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.688349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.688358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 
00:26:45.207 [2024-07-15 17:08:51.688449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.688459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.688559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.688569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.688746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.688756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.688951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.688962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.689120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.689130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 
00:26:45.207 [2024-07-15 17:08:51.689235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.689245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.689317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.689327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.689427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.689436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.689665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.689675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.689941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.689950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 
00:26:45.207 [2024-07-15 17:08:51.690045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.690057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.690250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.690260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.690380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.690390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.690500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.690510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.690681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.690691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 
00:26:45.207 [2024-07-15 17:08:51.690850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.690863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.690965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.690976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.691146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.691156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.691346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.691357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.691457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.691470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 
00:26:45.207 [2024-07-15 17:08:51.691627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.691637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.691751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.691761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.691928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.691938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.692051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.692061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.692179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.692189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 
00:26:45.207 [2024-07-15 17:08:51.692288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.692298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.692403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.692413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.692515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.692525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.692772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.692782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 00:26:45.207 [2024-07-15 17:08:51.692968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.207 [2024-07-15 17:08:51.692978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.207 qpair failed and we were unable to recover it. 
00:26:45.208 [2024-07-15 17:08:51.693204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.693214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.693340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.693350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.693547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.693557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.693808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.693818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.694045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.694055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 
00:26:45.208 [2024-07-15 17:08:51.694163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.694173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.694287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.694298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.694479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.694489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.694606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.694617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.694733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.694743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 
00:26:45.208 [2024-07-15 17:08:51.694860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.694870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.694964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.694989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.695089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.695099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.695243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.695254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.695356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.695366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 
00:26:45.208 [2024-07-15 17:08:51.695533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.695543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.695637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.695646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.695895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.695905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.696063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.696073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.696295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.696306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 
00:26:45.208 [2024-07-15 17:08:51.696463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.696473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.696641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.696651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.696857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.696867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.697090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.697099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.697210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.697220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 
00:26:45.208 [2024-07-15 17:08:51.697376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.697388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.697519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.697529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.697628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.697639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.697814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.697824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.697912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.697922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 
00:26:45.208 [2024-07-15 17:08:51.698016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.698026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.698122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.698132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.698290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.698300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.698409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.698420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.698512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.698522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 
00:26:45.208 [2024-07-15 17:08:51.698680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.698691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.698916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.698926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.699093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.699103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.699212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.699222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.699339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.699349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 
00:26:45.208 [2024-07-15 17:08:51.699536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.699546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.699652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.699662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.699805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.699814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.699962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.699972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.700146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.700156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 
00:26:45.208 [2024-07-15 17:08:51.700382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.700392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.700499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.700508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.700761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.700771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.700950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.700960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.701136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.701146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 
00:26:45.208 [2024-07-15 17:08:51.701326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.701336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.701442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.701452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.701614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.701624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.701722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.701732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 00:26:45.208 [2024-07-15 17:08:51.701888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.208 [2024-07-15 17:08:51.701898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.208 qpair failed and we were unable to recover it. 
00:26:45.211 [2024-07-15 17:08:51.720065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.720075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.720299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.720309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.720462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.720472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.720578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.720588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.720688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.720698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 
00:26:45.211 [2024-07-15 17:08:51.720925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.720935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.721100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.721110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.721360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.721370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.721475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.721485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.721654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.721664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 
00:26:45.211 [2024-07-15 17:08:51.721821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.721831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.722020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.722031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.722253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.722264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.722418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.722428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.722542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.722553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 
00:26:45.211 [2024-07-15 17:08:51.722766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.722776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.722964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.722974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.723146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.723156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.723342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.723353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.723465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.723475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 
00:26:45.211 [2024-07-15 17:08:51.723578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.723588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.723746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.723756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.723866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.723876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.724140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.724150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.724264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.724275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 
00:26:45.211 [2024-07-15 17:08:51.724519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.724531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.724700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.724709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.724934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.724944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.725232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.725242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.725416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.725426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 
00:26:45.211 [2024-07-15 17:08:51.725605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.725615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.725706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.725716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.725871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.725881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.725989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.725998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.726093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.726103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 
00:26:45.211 [2024-07-15 17:08:51.726258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.726268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.726436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.726446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.726600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.726610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.726721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.726731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.726929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.726940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 
00:26:45.211 [2024-07-15 17:08:51.727039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.727049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.727149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.727160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.727250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.727260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.727433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.727442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.727535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.727545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 
00:26:45.211 [2024-07-15 17:08:51.727711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.727721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.727894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.727904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.728083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.728093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.728251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.728262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.728435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.728446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 
00:26:45.211 [2024-07-15 17:08:51.728565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.728575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.728691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.728702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.211 qpair failed and we were unable to recover it. 00:26:45.211 [2024-07-15 17:08:51.728801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.211 [2024-07-15 17:08:51.728812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.728979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.728990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.729210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.729220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 
00:26:45.212 [2024-07-15 17:08:51.729378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.729389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.729485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.729495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.729599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.729609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.729833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.729843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.730005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.730015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 
00:26:45.212 [2024-07-15 17:08:51.730251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.730262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.730510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.730520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.730646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.730656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.730767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.730777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.730960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.730970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 
00:26:45.212 [2024-07-15 17:08:51.731065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.731077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.731260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.731270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.731504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.731514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.731635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.731645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.731867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.731877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 
00:26:45.212 [2024-07-15 17:08:51.732054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.732064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.732308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.732318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.732383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.732393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.732558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.732568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.732742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.732753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 
00:26:45.212 [2024-07-15 17:08:51.732923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.732934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.733077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.733087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.733266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.733276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.733448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.733458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 00:26:45.212 [2024-07-15 17:08:51.733580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.733590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 
00:26:45.212 [2024-07-15 17:08:51.733696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.212 [2024-07-15 17:08:51.733706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.212 qpair failed and we were unable to recover it. 
00:26:45.214 [... the same connect() errno 111 / nvme_tcp_qpair_connect_sock error triplet repeats continuously from 17:08:51.733812 through 17:08:51.752196, alternating between tqpair=0x7f4d4c000b90 and tqpair=0x7f4d44000b90, always with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:26:45.214 [2024-07-15 17:08:51.752297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.752308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.752411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.752421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.752520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.752531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.752620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.752630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.752753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.752764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 
00:26:45.214 [2024-07-15 17:08:51.753035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.753045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.753219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.753233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.753310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.753320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.753414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.753424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.753531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.753540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 
00:26:45.214 [2024-07-15 17:08:51.753715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.753725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.753819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.753829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.753992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.754002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.754161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.754172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.214 [2024-07-15 17:08:51.754266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.754277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 
00:26:45.214 [2024-07-15 17:08:51.754444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.214 [2024-07-15 17:08:51.754454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.214 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.754682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.754693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.754871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.754883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.754978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.754989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.755081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.755091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 
00:26:45.215 [2024-07-15 17:08:51.755213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.755223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.755414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.755425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.755594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.755605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.755788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.755798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.755976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.755987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 
00:26:45.215 [2024-07-15 17:08:51.756174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.756185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.756282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.756293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.756407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.756417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.756524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.756534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.756693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.756703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 
00:26:45.215 [2024-07-15 17:08:51.756936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.756947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.757126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.757136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.757232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.757243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.757332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.757342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.757407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.757417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 
00:26:45.215 [2024-07-15 17:08:51.757533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.757543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.757712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.757722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.757820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.757830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.757899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.757909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.758008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.758018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 
00:26:45.215 [2024-07-15 17:08:51.758129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.758138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.758231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.758241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.758479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.758489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.758585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.758596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.758690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.758700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 
00:26:45.215 [2024-07-15 17:08:51.758804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.758814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.758906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.758916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.759020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.759030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.759118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.759128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.759214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.759227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 
00:26:45.215 [2024-07-15 17:08:51.759329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.759340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.759584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.759593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.759754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.759764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.759968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.759978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.760139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.760149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 
00:26:45.215 [2024-07-15 17:08:51.760311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.760321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.760506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.760517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.760619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.760631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.760780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.760790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.760964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.760974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 
00:26:45.215 [2024-07-15 17:08:51.761069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.761079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.761171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.761181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.761429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.761439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.761554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.761566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.761734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.761744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 
00:26:45.215 [2024-07-15 17:08:51.761970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.761980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.762059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.762070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.762160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.762171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.762349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.762360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.215 [2024-07-15 17:08:51.762476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.762487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 
00:26:45.215 [2024-07-15 17:08:51.762654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.215 [2024-07-15 17:08:51.762666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.215 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.762767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.762778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.762863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.762873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.763046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.763056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.763232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.763243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 
00:26:45.216 [2024-07-15 17:08:51.763421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.763432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.763531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.763541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.763656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.763666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.763763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.763772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.763945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.763955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 
00:26:45.216 [2024-07-15 17:08:51.764120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.764131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.764300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.764311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.764475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.764485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.764659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.764669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 00:26:45.216 [2024-07-15 17:08:51.764787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.216 [2024-07-15 17:08:51.764797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.216 qpair failed and we were unable to recover it. 
00:26:45.216 [2024-07-15 17:08:51.764910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.764921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.765096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.765106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.765277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.765288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.765387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.765397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.765492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.765502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.765596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.765606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.765776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.765786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.765889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.765900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.766057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.766067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.766170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.766180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.766376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.766388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.766492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.766503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.766754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.766767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.766936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.766946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.767053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.767064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.767237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.767248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.767364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.767374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.767479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.767489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.767733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.767743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.767832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.767842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.767951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.767961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.768125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.768135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.768304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.768314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.768413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.768424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.768595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.768605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.768789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.768799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.768911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.768921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.769112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.769123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.769231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.769241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.769409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.769419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.769587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.769598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.769708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.769718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.769901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.769911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.770027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.770037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.216 qpair failed and we were unable to recover it.
00:26:45.216 [2024-07-15 17:08:51.770203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.216 [2024-07-15 17:08:51.770214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.770380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.770389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.770567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.770577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.770670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.770680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.770784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.770793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.770999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.771010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.771215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.771230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.771320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.771331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.771436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.771446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.771616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.771626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.771721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.771731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.771830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.771841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.772011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.772021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.772193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.772205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.772438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.772450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.772541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.772551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.772727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.772740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.772901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.772912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.773014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.773027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.773139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.773149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.773304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.773314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.773422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.773432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.773497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.773507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.773605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.773615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.773769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.773779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.773882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.773892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.774009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.774019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.774197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.774207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.774365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.774375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.774541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.774552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.774740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.774750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.774861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.774872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.775055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.775065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.775298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.775309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.775476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.775486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.775651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.775661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.775782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.775792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.775962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.775974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.776142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.776152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.776259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.776269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.776518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.776528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.776705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.776716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.776876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.776886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.777005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.777016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.777189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.777199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.777386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.777397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.777618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.777628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.777819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.777829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.777933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.777943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.778052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.778062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.778272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.778282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.778458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.778468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.778623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.778633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.778718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.778728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.778899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.778908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.217 [2024-07-15 17:08:51.779035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.217 [2024-07-15 17:08:51.779045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.217 qpair failed and we were unable to recover it.
00:26:45.218 [2024-07-15 17:08:51.779149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.218 [2024-07-15 17:08:51.779159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.218 qpair failed and we were unable to recover it.
00:26:45.218 [2024-07-15 17:08:51.779260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.218 [2024-07-15 17:08:51.779271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.218 qpair failed and we were unable to recover it.
00:26:45.218 [2024-07-15 17:08:51.779359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.218 [2024-07-15 17:08:51.779371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.218 qpair failed and we were unable to recover it.
00:26:45.218 [2024-07-15 17:08:51.779488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.218 [2024-07-15 17:08:51.779498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.218 qpair failed and we were unable to recover it.
00:26:45.218 [2024-07-15 17:08:51.779594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.218 [2024-07-15 17:08:51.779604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.218 qpair failed and we were unable to recover it.
00:26:45.218 [2024-07-15 17:08:51.779715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.779725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.779888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.779899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.779992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.780002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.780093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.780103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.780355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.780365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 
00:26:45.218 [2024-07-15 17:08:51.780520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.780530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.780626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.780636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.780796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.780806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.780885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.780894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.781063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.781073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 
00:26:45.218 [2024-07-15 17:08:51.781148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.781158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.781316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.781327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.781521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.781531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.781698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.781708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.781811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.781822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 
00:26:45.218 [2024-07-15 17:08:51.781910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.781920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.782035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.782045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.782139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.782151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.782329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.782339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.782501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.782511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 
00:26:45.218 [2024-07-15 17:08:51.782668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.782679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.782853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.782863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.783011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.783022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.783179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.783191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.783345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.783355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 
00:26:45.218 [2024-07-15 17:08:51.783527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.783538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.783711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.783721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.783818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.783829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.784050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.784061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.784176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.784187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 
00:26:45.218 [2024-07-15 17:08:51.784300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.784311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.784501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.784511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.784715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.784725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.784860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.784870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.785026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.785036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 
00:26:45.218 [2024-07-15 17:08:51.785138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.785148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.785312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.785323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.785419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.785431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.785609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.785619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.785779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.785789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 
00:26:45.218 [2024-07-15 17:08:51.785856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.785866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.786042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.786053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.786155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.786165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.786264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.786274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.786446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.786456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 
00:26:45.218 [2024-07-15 17:08:51.786553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.786564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.786666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.786676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.786856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.786867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.218 qpair failed and we were unable to recover it. 00:26:45.218 [2024-07-15 17:08:51.786969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.218 [2024-07-15 17:08:51.786980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.787184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.787195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 
00:26:45.219 [2024-07-15 17:08:51.787291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.787301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.787460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.787470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.787560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.787570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.787676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.787687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.787849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.787859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 
00:26:45.219 [2024-07-15 17:08:51.787973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.787983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.788165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.788175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.788247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.788257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.788365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.788375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.788598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.788608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 
00:26:45.219 [2024-07-15 17:08:51.788710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.788720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.788888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.788899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.789010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.789021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.789127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.789137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.789276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.789287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 
00:26:45.219 [2024-07-15 17:08:51.789480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.789490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.789735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.789746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.789847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.789857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.790058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.790068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.790172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.790182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 
00:26:45.219 [2024-07-15 17:08:51.790286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.790297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.790394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.790404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.790528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.790538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.790695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.790706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.790802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.790812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 
00:26:45.219 [2024-07-15 17:08:51.790926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.790936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.791023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.791034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.791205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.791218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.791387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.791397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.791574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.791584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 
00:26:45.219 [2024-07-15 17:08:51.791725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.791735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.791825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.791835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.791997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.792008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.792078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.792088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 00:26:45.219 [2024-07-15 17:08:51.792190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.219 [2024-07-15 17:08:51.792200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.219 qpair failed and we were unable to recover it. 
00:26:45.219 [2024-07-15 17:08:51.792308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.219 [2024-07-15 17:08:51.792319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.219 qpair failed and we were unable to recover it.
00:26:45.219 [... the same record repeats through 17:08:51.809029: connect() failed, errno = 111, followed by a sock connection error and "qpair failed and we were unable to recover it.", alternating between tqpair=0x7f4d4c000b90 and tqpair=0x7f4d54000b90, always for addr=10.0.0.2, port=4420 ...]
00:26:45.222 [2024-07-15 17:08:51.809019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.222 [2024-07-15 17:08:51.809029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.222 qpair failed and we were unable to recover it.
00:26:45.222 [2024-07-15 17:08:51.809242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.809253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.809476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.809486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.809643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.809653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.809843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.809853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.810042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.810052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 
00:26:45.222 [2024-07-15 17:08:51.810210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.810220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.810352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.810362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.810547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.810557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.810725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.810736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.810834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.810844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 
00:26:45.222 [2024-07-15 17:08:51.810939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.810949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.811105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.811114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.811287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.811297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.811400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.811410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.222 [2024-07-15 17:08:51.811514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.811524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 
00:26:45.222 [2024-07-15 17:08:51.811690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.222 [2024-07-15 17:08:51.811699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.222 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.811789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.811798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.812050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.812060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.812234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.812244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.812337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.812347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 
00:26:45.223 [2024-07-15 17:08:51.812447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.812457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.812616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.812625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.812734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.812745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.812900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.812911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.813020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.813030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 
00:26:45.223 [2024-07-15 17:08:51.813184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.813194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.813308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.813318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.813452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.813462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.813631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.813642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.813733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.813743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 
00:26:45.223 [2024-07-15 17:08:51.813854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.813864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.813956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.813967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.814082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.814092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.814200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.814209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.814363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.814373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 
00:26:45.223 [2024-07-15 17:08:51.814531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.814541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.814704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.814714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.814786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.814798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.814907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.814917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.815022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.815033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 
00:26:45.223 [2024-07-15 17:08:51.815151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.815161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.815315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.815326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.815416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.815426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.815516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.815525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.815622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.815631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 
00:26:45.223 [2024-07-15 17:08:51.815785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.815795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.815885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.815895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.816038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.816048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.816152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.816162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.816331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.816341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 
00:26:45.223 [2024-07-15 17:08:51.816434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.816444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.816543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.816553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.816641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.816651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.816744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.816754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.816855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.816865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 
00:26:45.223 [2024-07-15 17:08:51.816985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.816995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.223 qpair failed and we were unable to recover it. 00:26:45.223 [2024-07-15 17:08:51.817091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.223 [2024-07-15 17:08:51.817101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.817191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.817201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.817310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.817320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.817419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.817429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 
00:26:45.224 [2024-07-15 17:08:51.817589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.817599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.817822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.817831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.817922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.817932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.818094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.818105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.818205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.818222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 
00:26:45.224 [2024-07-15 17:08:51.818409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.818423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.818590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.818603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.818712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.818726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.818838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.818852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.818961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.818975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 
00:26:45.224 [2024-07-15 17:08:51.819159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.819172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.819277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.819291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.819460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.819474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.819589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.819602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 00:26:45.224 [2024-07-15 17:08:51.819720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.224 [2024-07-15 17:08:51.819733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.224 qpair failed and we were unable to recover it. 
00:26:45.224 [2024-07-15 17:08:51.819899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.224 [2024-07-15 17:08:51.819913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.224 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats continuously from 17:08:51.819899 through 17:08:51.836488, always with errno = 111, addr=10.0.0.2, port=4420; first for tqpair=0x7f4d54000b90, then (from 17:08:51.823812 onward) for tqpair=0x7f4d4c000b90 ...]
00:26:45.520 [2024-07-15 17:08:51.836478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.520 [2024-07-15 17:08:51.836488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.520 qpair failed and we were unable to recover it.
00:26:45.520 [2024-07-15 17:08:51.836587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.836598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.836709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.836719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.836902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.836913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.837003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.837013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.837122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.837132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 
00:26:45.520 [2024-07-15 17:08:51.837300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.837310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.837409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.837419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.837512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.837522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.837841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.837851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.837920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.837930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 
00:26:45.520 [2024-07-15 17:08:51.838024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.838033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.838137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.838147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.838269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.838279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.838491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.838502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.838672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.838683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 
00:26:45.520 [2024-07-15 17:08:51.838787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.838797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.838898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.838908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.839015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.839026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.839114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.839124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.839223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.839238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 
00:26:45.520 [2024-07-15 17:08:51.839344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.839354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.839534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.839544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.839654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.839663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.839798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.839809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.839901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.839911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 
00:26:45.520 [2024-07-15 17:08:51.839999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.840008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.840114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.840125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.840222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.840237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.840337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.840347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.840437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.840447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 
00:26:45.520 [2024-07-15 17:08:51.840622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.840632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.840810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.840820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.840920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.840930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.841029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.841038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 00:26:45.520 [2024-07-15 17:08:51.841191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.520 [2024-07-15 17:08:51.841203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.520 qpair failed and we were unable to recover it. 
00:26:45.521 [2024-07-15 17:08:51.841281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.841291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.841487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.841497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.841651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.841661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.841762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.841772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.841873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.841883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 
00:26:45.521 [2024-07-15 17:08:51.841981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.841992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.842080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.842090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.842212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.842222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.842318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.842328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.842435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.842445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 
00:26:45.521 [2024-07-15 17:08:51.842529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.842539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.842761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.842772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.842863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.842873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.842981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.842991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.843154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.843164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 
00:26:45.521 [2024-07-15 17:08:51.843365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.843375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.843484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.843494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.843590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.843599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.843691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.843700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.843818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.843828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 
00:26:45.521 [2024-07-15 17:08:51.843966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.843976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.844067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.844077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.844187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.844198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.844370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.844380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.844548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.844557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 
00:26:45.521 [2024-07-15 17:08:51.844658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.844669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.844844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.844855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.845061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.845071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.845241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.845252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.845450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.845461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 
00:26:45.521 [2024-07-15 17:08:51.845553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.845562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.845721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.845731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.845854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.845864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.845975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.845986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.846080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.846090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 
00:26:45.521 [2024-07-15 17:08:51.846196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.846206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.846313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.846324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.846573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.846583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.846679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.846688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 00:26:45.521 [2024-07-15 17:08:51.846793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.521 [2024-07-15 17:08:51.846805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.521 qpair failed and we were unable to recover it. 
00:26:45.521 [2024-07-15 17:08:51.846970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.522 [2024-07-15 17:08:51.846980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.522 qpair failed and we were unable to recover it. 
[... identical error triplet (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated through timestamp 2024-07-15 17:08:51.863465 ...]
00:26:45.525 [2024-07-15 17:08:51.863572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.863582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.863739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.863748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.863941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.863951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.864188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.864198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.864305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.864317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 
00:26:45.525 [2024-07-15 17:08:51.864413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.864423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.864605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.864615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.864735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.864746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.864913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.864922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.865033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.865044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 
00:26:45.525 [2024-07-15 17:08:51.865207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.865218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.865319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.865329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.865488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.865498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.865597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.865607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.865697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.865707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 
00:26:45.525 [2024-07-15 17:08:51.865809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.865819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.865933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.865943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.866048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.866058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.866257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.866267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.866389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.866399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 
00:26:45.525 [2024-07-15 17:08:51.866664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.866675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.866835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.866846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.866936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.866946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.867105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.867116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.867201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.867211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 
00:26:45.525 [2024-07-15 17:08:51.867328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.867338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.867514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.867524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.867636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.867646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.867736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.867746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.867870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.867881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 
00:26:45.525 [2024-07-15 17:08:51.868058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.868068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.868173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.868184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.868284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.868294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.868517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.868528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.868652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.868662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 
00:26:45.525 [2024-07-15 17:08:51.868760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.868773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.868892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.868902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.869154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.869164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.869261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.869271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 00:26:45.525 [2024-07-15 17:08:51.869378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.525 [2024-07-15 17:08:51.869389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.525 qpair failed and we were unable to recover it. 
00:26:45.526 [2024-07-15 17:08:51.869504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.869515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.869608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.869618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.869725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.869735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.869823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.869833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.869909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.869921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 
00:26:45.526 [2024-07-15 17:08:51.870093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.870103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.870194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.870204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.870310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.870321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.870520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.870530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.870699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.870709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 
00:26:45.526 [2024-07-15 17:08:51.870805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.870815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.870922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.870932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.871098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.871109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.871200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.871210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.871326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.871337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 
00:26:45.526 [2024-07-15 17:08:51.871432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.871442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.871540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.871550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.871639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.871649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.871754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.871765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.871870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.871880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 
00:26:45.526 [2024-07-15 17:08:51.871969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.871979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.872066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.872077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.872249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.872260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.872379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.872389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.872480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.872491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 
00:26:45.526 [2024-07-15 17:08:51.872590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.872601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.872692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.872702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.872864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.872874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.872965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.872975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.873122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.873132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 
00:26:45.526 [2024-07-15 17:08:51.873385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.873396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.873580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.873609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.873785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.873801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.873970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.873984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.874127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.874141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 
00:26:45.526 [2024-07-15 17:08:51.874328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.874344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.874554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.874568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.874740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.874754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.874854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.874867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 00:26:45.526 [2024-07-15 17:08:51.874982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.526 [2024-07-15 17:08:51.874995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.526 qpair failed and we were unable to recover it. 
00:26:45.528 [2024-07-15 17:08:51.883997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.528 [2024-07-15 17:08:51.884017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.528 qpair failed and we were unable to recover it.
00:26:45.529 [2024-07-15 17:08:51.889575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.529 [2024-07-15 17:08:51.889592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.529 qpair failed and we were unable to recover it.
00:26:45.529 [2024-07-15 17:08:51.889727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.529 [2024-07-15 17:08:51.889742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.529 qpair failed and we were unable to recover it. 00:26:45.529 [2024-07-15 17:08:51.889855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.529 [2024-07-15 17:08:51.889869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.529 qpair failed and we were unable to recover it. 00:26:45.529 [2024-07-15 17:08:51.890035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.529 [2024-07-15 17:08:51.890049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.529 qpair failed and we were unable to recover it. 00:26:45.529 [2024-07-15 17:08:51.890156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.529 [2024-07-15 17:08:51.890170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.529 qpair failed and we were unable to recover it. 00:26:45.529 [2024-07-15 17:08:51.890292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.529 [2024-07-15 17:08:51.890306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.529 qpair failed and we were unable to recover it. 
00:26:45.529 [2024-07-15 17:08:51.890412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.529 [2024-07-15 17:08:51.890426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.529 qpair failed and we were unable to recover it. 00:26:45.529 [2024-07-15 17:08:51.890539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.529 [2024-07-15 17:08:51.890552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.529 qpair failed and we were unable to recover it. 00:26:45.529 [2024-07-15 17:08:51.890654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.890667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.890835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.890849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.891019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.891032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 
00:26:45.530 [2024-07-15 17:08:51.891108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.891121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.891244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.891256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.891365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.891375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.891466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.891477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.891662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.891672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 
00:26:45.530 [2024-07-15 17:08:51.891835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.891846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.891937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.891947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.892036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.892048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.892162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.892172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.892396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.892407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 
00:26:45.530 [2024-07-15 17:08:51.892567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.892577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.892688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.892697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.892876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.892886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.892997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.893007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.893181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.893192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 
00:26:45.530 [2024-07-15 17:08:51.893296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.893307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.893472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.893482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.893548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.893558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.893694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.893704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.893802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.893812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 
00:26:45.530 [2024-07-15 17:08:51.893909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.893919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.894018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.894028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.894142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.894152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.894270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.894281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.894387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.894397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 
00:26:45.530 [2024-07-15 17:08:51.894590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.894600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.894697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.894706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.894869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.894879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.895040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.895050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.895297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.895311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 
00:26:45.530 [2024-07-15 17:08:51.895472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.895483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.895578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.895588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.530 [2024-07-15 17:08:51.895756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.530 [2024-07-15 17:08:51.895767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.530 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.895953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.895963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.896068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.896078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 
00:26:45.531 [2024-07-15 17:08:51.896171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.896182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.896306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.896316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.896411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.896421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.896523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.896533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.896641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.896651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 
00:26:45.531 [2024-07-15 17:08:51.896743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.896752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.896869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.896880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.896992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.897003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.897106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.897116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.897276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.897286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 
00:26:45.531 [2024-07-15 17:08:51.897447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.897457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.897642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.897652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.897866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.897876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.897982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.897993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.898096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.898106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 
00:26:45.531 [2024-07-15 17:08:51.898234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.898244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.898323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.898333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.898432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.898442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.898595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.898605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.898881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.898892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 
00:26:45.531 [2024-07-15 17:08:51.898984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.898994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.899107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.899117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.899229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.899240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.899319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.899332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.899438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.899448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 
00:26:45.531 [2024-07-15 17:08:51.899540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.899550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.899670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.899680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.899784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.899793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.899896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.899908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.900066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.900076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 
00:26:45.531 [2024-07-15 17:08:51.900325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.900336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.900494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.531 [2024-07-15 17:08:51.900504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.531 qpair failed and we were unable to recover it. 00:26:45.531 [2024-07-15 17:08:51.900597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.532 [2024-07-15 17:08:51.900607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.532 qpair failed and we were unable to recover it. 00:26:45.532 [2024-07-15 17:08:51.900728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.532 [2024-07-15 17:08:51.900739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.532 qpair failed and we were unable to recover it. 00:26:45.532 [2024-07-15 17:08:51.900843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.532 [2024-07-15 17:08:51.900855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.532 qpair failed and we were unable to recover it. 
00:26:45.532 [2024-07-15 17:08:51.900950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.532 [2024-07-15 17:08:51.900960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.532 qpair failed and we were unable to recover it. 00:26:45.532 [2024-07-15 17:08:51.901232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.532 [2024-07-15 17:08:51.901243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.532 qpair failed and we were unable to recover it. 00:26:45.532 [2024-07-15 17:08:51.901376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.532 [2024-07-15 17:08:51.901386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.532 qpair failed and we were unable to recover it. 00:26:45.532 [2024-07-15 17:08:51.901643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.532 [2024-07-15 17:08:51.901654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.532 qpair failed and we were unable to recover it. 00:26:45.532 [2024-07-15 17:08:51.901758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.532 [2024-07-15 17:08:51.901768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.532 qpair failed and we were unable to recover it. 
00:26:45.532 [2024-07-15 17:08:51.901847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.532 [2024-07-15 17:08:51.901857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.532 qpair failed and we were unable to recover it.
(The same connect() failed / sock connection error / qpair-recovery failure triplet for tqpair=0x7f4d4c000b90, addr=10.0.0.2, port=4420 repeats continuously through timestamps 17:08:51.901954 to 17:08:51.917894; the verbatim repeats are elided here.)
00:26:45.535 [2024-07-15 17:08:51.917985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.535 [2024-07-15 17:08:51.917995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.535 qpair failed and we were unable to recover it. 00:26:45.535 [2024-07-15 17:08:51.918059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.535 [2024-07-15 17:08:51.918069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.535 qpair failed and we were unable to recover it. 00:26:45.535 [2024-07-15 17:08:51.918171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.535 [2024-07-15 17:08:51.918181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.535 qpair failed and we were unable to recover it. 00:26:45.535 [2024-07-15 17:08:51.918329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.535 [2024-07-15 17:08:51.918339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.535 qpair failed and we were unable to recover it. 00:26:45.535 [2024-07-15 17:08:51.918429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.535 [2024-07-15 17:08:51.918439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.535 qpair failed and we were unable to recover it. 
00:26:45.535 [2024-07-15 17:08:51.918535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.535 [2024-07-15 17:08:51.918544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.535 qpair failed and we were unable to recover it. 00:26:45.535 [2024-07-15 17:08:51.918748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.535 [2024-07-15 17:08:51.918758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.535 qpair failed and we were unable to recover it. 00:26:45.535 [2024-07-15 17:08:51.918903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.535 [2024-07-15 17:08:51.918913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.535 qpair failed and we were unable to recover it. 00:26:45.535 [2024-07-15 17:08:51.919075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.535 [2024-07-15 17:08:51.919085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.535 qpair failed and we were unable to recover it. 00:26:45.535 [2024-07-15 17:08:51.919180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.535 [2024-07-15 17:08:51.919190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.535 qpair failed and we were unable to recover it. 
00:26:45.535 [2024-07-15 17:08:51.919293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.919303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.919427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.919437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.919547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.919557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.919650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.919660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.919756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.919766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 
00:26:45.536 [2024-07-15 17:08:51.919860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.919869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.919972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.919982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.920084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.920093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.920323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.920333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.920429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.920439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 
00:26:45.536 [2024-07-15 17:08:51.920529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.920539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.920693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.920703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.920800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.920810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.920899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.920916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.921009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.921018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 
00:26:45.536 [2024-07-15 17:08:51.921181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.921191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.921289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.921301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.921405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.921414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.921505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.921515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.921680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.921690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 
00:26:45.536 [2024-07-15 17:08:51.921779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.921789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.921949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.921959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.922059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.922069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.922174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.922183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.922345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.922355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 
00:26:45.536 [2024-07-15 17:08:51.922456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.922466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.922565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.922575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.922651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.922661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.922823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.922834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.922929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.922940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 
00:26:45.536 [2024-07-15 17:08:51.923037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.923047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.923112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.923122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.923288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.923298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.923401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.923412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.923499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.923509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 
00:26:45.536 [2024-07-15 17:08:51.923669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.923679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.923780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.923790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.923883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.923892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.924005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.924015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.924109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.924119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 
00:26:45.536 [2024-07-15 17:08:51.924205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.924214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.924377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.924388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.924498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.924509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.924573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.924583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.536 qpair failed and we were unable to recover it. 00:26:45.536 [2024-07-15 17:08:51.924692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.536 [2024-07-15 17:08:51.924702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 
00:26:45.537 [2024-07-15 17:08:51.924780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.924790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.924919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.924929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.925116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.925126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.925220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.925234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.925389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.925399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 
00:26:45.537 [2024-07-15 17:08:51.925558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.925568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.925777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.925787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.925968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.925978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.926188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.926198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.926301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.926313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 
00:26:45.537 [2024-07-15 17:08:51.926515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.926526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.926688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.926700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.926799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.926809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.926901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.926911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.927012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.927022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 
00:26:45.537 [2024-07-15 17:08:51.927137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.927147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.927315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.927326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.927482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.927492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.927652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.927662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 00:26:45.537 [2024-07-15 17:08:51.927910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.537 [2024-07-15 17:08:51.927921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.537 qpair failed and we were unable to recover it. 
00:26:45.537 [2024-07-15 17:08:51.928082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.537 [2024-07-15 17:08:51.928093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.537 qpair failed and we were unable to recover it.
00:26:45.537-00:26:45.539 [... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f4d4c000b90 (addr=10.0.0.2, port=4420) repeats continuously through 2024-07-15 17:08:51.943743 ...]
00:26:45.539 [2024-07-15 17:08:51.943862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.943872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.943960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.943970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.944157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.944167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.944267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.944277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.944435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.944445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 
00:26:45.539 [2024-07-15 17:08:51.944542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.944552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.944669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.944679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.944845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.944856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.944973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.944985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.945115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.945125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 
00:26:45.539 [2024-07-15 17:08:51.945219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.945234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.945349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.945359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.945431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.945441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.945636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.945646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.945783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.945793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 
00:26:45.539 [2024-07-15 17:08:51.945887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.945898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.946015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.946024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.539 qpair failed and we were unable to recover it. 00:26:45.539 [2024-07-15 17:08:51.946201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.539 [2024-07-15 17:08:51.946212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.946374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.946384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.946566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.946577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 
00:26:45.540 [2024-07-15 17:08:51.946752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.946761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.946855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.946864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.946948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.946959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.947074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.947085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.947193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.947203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 
00:26:45.540 [2024-07-15 17:08:51.947365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.947376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.947439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.947449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.947508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.947518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.947610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.947620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.947706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.947716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 
00:26:45.540 [2024-07-15 17:08:51.947835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.947845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.947940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.947949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.948054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.948064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.948157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.948167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.948262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.948273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 
00:26:45.540 [2024-07-15 17:08:51.948449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.948459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.948555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.948567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.948685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.948695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.948789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.948799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.948913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.948923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 
00:26:45.540 [2024-07-15 17:08:51.949026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.949036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.949128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.949139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.949301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.949311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.949441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.949451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.949611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.949621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 
00:26:45.540 [2024-07-15 17:08:51.949695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.949705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.949800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.949810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.949904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.949914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.950075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.950085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.950260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.950270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 
00:26:45.540 [2024-07-15 17:08:51.950366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.950377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.950471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.950481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.950571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.950581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.950681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.950691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.950862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.950872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 
00:26:45.540 [2024-07-15 17:08:51.950955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.950964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.951066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.951076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.951241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.951252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.951344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.951354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.951511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.951521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 
00:26:45.540 [2024-07-15 17:08:51.951702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.951712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.951820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.951830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.951923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.951933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.952117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.952128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.952238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.952249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 
00:26:45.540 [2024-07-15 17:08:51.952416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.540 [2024-07-15 17:08:51.952426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.540 qpair failed and we were unable to recover it. 00:26:45.540 [2024-07-15 17:08:51.952596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.952606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 00:26:45.541 [2024-07-15 17:08:51.952727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.952737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 00:26:45.541 [2024-07-15 17:08:51.952895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.952905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 00:26:45.541 [2024-07-15 17:08:51.952999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.953009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 
00:26:45.541 [2024-07-15 17:08:51.953117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.953127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 00:26:45.541 [2024-07-15 17:08:51.953214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.953228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 00:26:45.541 [2024-07-15 17:08:51.953391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.953401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 00:26:45.541 [2024-07-15 17:08:51.953518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.953528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 00:26:45.541 [2024-07-15 17:08:51.953621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.953631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 
00:26:45.541 [2024-07-15 17:08:51.953706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.953716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 00:26:45.541 [2024-07-15 17:08:51.953895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.953908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 00:26:45.541 [2024-07-15 17:08:51.954091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.954101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 00:26:45.541 [2024-07-15 17:08:51.954278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.954288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 00:26:45.541 [2024-07-15 17:08:51.954381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.541 [2024-07-15 17:08:51.954391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.541 qpair failed and we were unable to recover it. 
00:26:45.543 [2024-07-15 17:08:51.969658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.969667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.969759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.969769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.969878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.969889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.969985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.969995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.970165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.970175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 
00:26:45.543 [2024-07-15 17:08:51.970266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.970277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.970384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.970396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.970500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.970510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.970727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.970737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.970850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.970860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 
00:26:45.543 [2024-07-15 17:08:51.971064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.971074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.971253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.971264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.971359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.971369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.971466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.971476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.971569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.971580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 
00:26:45.543 [2024-07-15 17:08:51.971687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.971697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.971796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.971807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.971899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.971909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.972087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.972098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.972199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.972209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 
00:26:45.543 [2024-07-15 17:08:51.972418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.972429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.972542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.972552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.972661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.972672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.972764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.972773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.972881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.972892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 
00:26:45.543 [2024-07-15 17:08:51.973023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.973032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.973139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.973149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.973259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.973269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.973359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.973369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.973463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.973473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 
00:26:45.543 [2024-07-15 17:08:51.973582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.973591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.973660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.973670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.543 [2024-07-15 17:08:51.973890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.543 [2024-07-15 17:08:51.973901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.543 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.973997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.974007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.974107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.974117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 
00:26:45.544 [2024-07-15 17:08:51.974202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.974212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.974376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.974388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.974558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.974568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.974646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.974656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.974742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.974752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 
00:26:45.544 [2024-07-15 17:08:51.974888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.974898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.974988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.974998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.975151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.975161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.975364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.975374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.975469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.975479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 
00:26:45.544 [2024-07-15 17:08:51.975650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.975661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.975826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.975838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.975938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.975949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.976175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.976185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.976353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.976364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 
00:26:45.544 [2024-07-15 17:08:51.976464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.976474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.976642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.976653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.976761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.976771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.976877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.976888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.976986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.976996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 
00:26:45.544 [2024-07-15 17:08:51.977196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.977206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.977325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.977336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.977504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.977515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.977621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.977632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.977729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.977739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 
00:26:45.544 [2024-07-15 17:08:51.977818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.977828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.977963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.977973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.978079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.978089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.978284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.978295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.978388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.978399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 
00:26:45.544 [2024-07-15 17:08:51.978478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.978488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.978609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.978620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.978708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.978719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.978816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.978826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.978915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.978926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 
00:26:45.544 [2024-07-15 17:08:51.979164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.979175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.979268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.979278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.979459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.979470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.979562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.979573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 00:26:45.544 [2024-07-15 17:08:51.979652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.979662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it. 
00:26:45.544 [2024-07-15 17:08:51.979771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.544 [2024-07-15 17:08:51.979781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.544 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) and unrecoverable qpair errors for tqpair=0x7f4d4c000b90 (addr=10.0.0.2, port=4420) repeat through 17:08:51.994118 ...]
00:26:45.546 [2024-07-15 17:08:51.994211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.546 [2024-07-15 17:08:51.994221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.546 qpair failed and we were unable to recover it. 00:26:45.546 [2024-07-15 17:08:51.994392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.546 [2024-07-15 17:08:51.994404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.546 qpair failed and we were unable to recover it. 00:26:45.546 [2024-07-15 17:08:51.994559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.546 [2024-07-15 17:08:51.994570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.546 qpair failed and we were unable to recover it. 00:26:45.546 [2024-07-15 17:08:51.994668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.546 [2024-07-15 17:08:51.994681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.546 qpair failed and we were unable to recover it. 00:26:45.546 [2024-07-15 17:08:51.994780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.546 [2024-07-15 17:08:51.994790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.546 qpair failed and we were unable to recover it. 
00:26:45.547 [2024-07-15 17:08:51.994881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.994891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.994981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.994992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.995125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.995136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.995298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.995308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.995385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.995394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 
00:26:45.547 [2024-07-15 17:08:51.995552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.995563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.995675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.995686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.995842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.995853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.995944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.995955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.996145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.996155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 
00:26:45.547 [2024-07-15 17:08:51.996316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.996327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.996420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.996431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.996524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.996536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.996625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.996635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.996737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.996748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 
00:26:45.547 [2024-07-15 17:08:51.996902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.996913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.997024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.997034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.997117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.997127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.997246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.997257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.997488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.997499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 
00:26:45.547 [2024-07-15 17:08:51.997603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.997613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.997781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.997792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.997885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.997895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.997986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.997996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.998084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.998094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 
00:26:45.547 [2024-07-15 17:08:51.998188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.998199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.998319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.998330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.998415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.998426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.998526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.998537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.998691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.998701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 
00:26:45.547 [2024-07-15 17:08:51.998927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.998938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.999039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.999051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.999120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.999131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.999287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.999298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.999461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.999473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 
00:26:45.547 [2024-07-15 17:08:51.999627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.999637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.999761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.999772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.999866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.999877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:51.999980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:51.999992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:52.000105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:52.000116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 
00:26:45.547 [2024-07-15 17:08:52.000296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:52.000308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:52.000486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:52.000497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:52.000657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:52.000667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:52.000763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:52.000774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:52.000915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:52.000926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 
00:26:45.547 [2024-07-15 17:08:52.001041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:52.001052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:52.001154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:52.001165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.547 [2024-07-15 17:08:52.001264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.547 [2024-07-15 17:08:52.001275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.547 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.001429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.001440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.001530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.001540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 
00:26:45.548 [2024-07-15 17:08:52.001646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.001657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.001815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.001826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.001984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.001995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.002092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.002104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.002206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.002217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 
00:26:45.548 [2024-07-15 17:08:52.002328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.002339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.002523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.002534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.002630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.002640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.002804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.002815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.002980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.002991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 
00:26:45.548 [2024-07-15 17:08:52.003078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.003088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.003260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.003271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.003374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.003384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.003571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.003582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.003749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.003760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 
00:26:45.548 [2024-07-15 17:08:52.003924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.003935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.004062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.004073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.004236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.004247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.004366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.004377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 00:26:45.548 [2024-07-15 17:08:52.004480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.548 [2024-07-15 17:08:52.004491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.548 qpair failed and we were unable to recover it. 
00:26:45.548 [2024-07-15 17:08:52.004661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.548 [2024-07-15 17:08:52.004672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.548 qpair failed and we were unable to recover it.
[identical connect() failure records (errno = 111) for tqpair=0x7f4d4c000b90, addr=10.0.0.2, port=4420 repeat from 17:08:52.004828 through 17:08:52.019552; duplicates elided]
00:26:45.550 [2024-07-15 17:08:52.019659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.019669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.019838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.019849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.019959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.019970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.020083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.020093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.020188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.020198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 
00:26:45.550 [2024-07-15 17:08:52.020361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.020372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.020540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.020551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.020650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.020659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.020800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.020811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.020986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.020997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 
00:26:45.550 [2024-07-15 17:08:52.021091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.021102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.021283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.021294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.021401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.021411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.021507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.021520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.021688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.021698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 
00:26:45.550 [2024-07-15 17:08:52.021806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.021817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.021930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.021941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.022035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.022047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.022134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.022144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.550 qpair failed and we were unable to recover it. 00:26:45.550 [2024-07-15 17:08:52.022248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.550 [2024-07-15 17:08:52.022258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 
00:26:45.551 [2024-07-15 17:08:52.022359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.022371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.022469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.022479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.022569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.022579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.022734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.022745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.022856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.022867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 
00:26:45.551 [2024-07-15 17:08:52.022955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.022966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.023072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.023082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.023177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.023188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.023276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.023286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.023375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.023386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 
00:26:45.551 [2024-07-15 17:08:52.023546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.023557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.023676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.023687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.023788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.023798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.023895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.023905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.024008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.024019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 
00:26:45.551 [2024-07-15 17:08:52.024100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.024111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.024282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.024293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.024464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.024475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.024586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.024597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.024691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.024701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 
00:26:45.551 [2024-07-15 17:08:52.024824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.024835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.024923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.024934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.025035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.025045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.025117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.025127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.025232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.025243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 
00:26:45.551 [2024-07-15 17:08:52.025344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.025354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.025449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.025461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.025622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.025633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.025803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.025814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.025910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.025920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 
00:26:45.551 [2024-07-15 17:08:52.026017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.026028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.026184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.026194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.026313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.026325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.026500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.026514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.026650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.026660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 
00:26:45.551 [2024-07-15 17:08:52.026750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.026761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.026931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.026941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.027035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.027045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.027142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.027153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.027268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.027278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 
00:26:45.551 [2024-07-15 17:08:52.027376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.027386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.027494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.027505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.027597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.027608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.027753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.027763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.027919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.027930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 
00:26:45.551 [2024-07-15 17:08:52.028096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.028107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.028183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.028193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.028357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.028368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.028466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.028476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.028543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.028553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 
00:26:45.551 [2024-07-15 17:08:52.028662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.028673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.028760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.028771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.028875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.028886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.551 [2024-07-15 17:08:52.029028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.551 [2024-07-15 17:08:52.029038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.551 qpair failed and we were unable to recover it. 00:26:45.552 [2024-07-15 17:08:52.029212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.552 [2024-07-15 17:08:52.029227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.552 qpair failed and we were unable to recover it. 
00:26:45.552 [2024-07-15 17:08:52.029319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.552 [2024-07-15 17:08:52.029329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.552 qpair failed and we were unable to recover it. 00:26:45.552 [2024-07-15 17:08:52.029423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.552 [2024-07-15 17:08:52.029433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.552 qpair failed and we were unable to recover it. 00:26:45.552 [2024-07-15 17:08:52.029531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.552 [2024-07-15 17:08:52.029542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.552 qpair failed and we were unable to recover it. 00:26:45.552 [2024-07-15 17:08:52.029724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.552 [2024-07-15 17:08:52.029735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.552 qpair failed and we were unable to recover it. 00:26:45.552 [2024-07-15 17:08:52.029906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.552 [2024-07-15 17:08:52.029923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.552 qpair failed and we were unable to recover it. 
00:26:45.552 [2024-07-15 17:08:52.030021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.552 [2024-07-15 17:08:52.030040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.552 qpair failed and we were unable to recover it. 00:26:45.552 [2024-07-15 17:08:52.030201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.552 [2024-07-15 17:08:52.030211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.552 qpair failed and we were unable to recover it. 00:26:45.552 [2024-07-15 17:08:52.030316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.552 [2024-07-15 17:08:52.030327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.552 qpair failed and we were unable to recover it. 00:26:45.552 [2024-07-15 17:08:52.030431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.552 [2024-07-15 17:08:52.030441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.552 qpair failed and we were unable to recover it. 00:26:45.552 [2024-07-15 17:08:52.030541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.552 [2024-07-15 17:08:52.030551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.552 qpair failed and we were unable to recover it. 
00:26:45.552 [2024-07-15 17:08:52.030711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.030722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.030821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.030832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.030969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.030980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.031068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.031078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.031265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.031276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.031381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.031392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.031487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.031498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.031662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.031672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.031849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.031862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.032031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.032048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.032146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.032156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.032250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.032261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.032431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.032442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.032539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.032549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.032642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.032653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.032772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.032783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.032944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.032954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.033120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.033131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.033203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.033214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.033337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.033363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.033464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.033479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.033549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.033564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.033736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.033752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.033903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.033918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.034167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.034181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.034350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.034365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.034465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.034479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.034648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.034663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.034768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.034783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.034894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.034909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.035019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.035033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.035213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.035235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.035322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.035336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.035461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.035475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.035578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.035592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.035698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.035711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.035834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.035845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.035955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.035966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.036062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.036074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.036188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.036199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.036365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.036376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.036472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.036482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.552 [2024-07-15 17:08:52.036569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.552 [2024-07-15 17:08:52.036579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.552 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.036683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.036693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.036848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.036858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.037016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.037027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.037145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.037156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.037247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.037258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.037422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.037437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.037603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.037613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.037790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.037801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.037889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.037899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.038018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.038029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.038121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.038131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.038223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.038237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.038315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.038325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.038431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.038441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.038529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.038540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.038635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.038646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.038750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.038760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.038942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.038953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.039052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.039063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.039156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.039167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.039334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.039345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.039424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.039435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.039536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.039547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.039639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.039650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.039819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.039830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.039946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.039957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.040080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.040091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.040178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.040188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.040284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.040294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.040403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.040412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.040572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.040582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.040675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.040686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.040804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.040820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.040924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.040938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.041021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.041034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.041200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.041213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.041333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.041347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.041443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.041457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.041558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.041572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.041689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.041703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.041796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.041809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.041980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.041994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.042110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.042123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.042233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.042246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.042356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.042370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.042465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.042482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.042678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.042692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.042845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.042860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.553 [2024-07-15 17:08:52.042960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.553 [2024-07-15 17:08:52.042974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.553 qpair failed and we were unable to recover it.
00:26:45.554 [2024-07-15 17:08:52.043077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.554 [2024-07-15 17:08:52.043091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.554 qpair failed and we were unable to recover it.
00:26:45.554 [2024-07-15 17:08:52.043203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.554 [2024-07-15 17:08:52.043217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.554 qpair failed and we were unable to recover it.
00:26:45.554 [2024-07-15 17:08:52.043384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.554 [2024-07-15 17:08:52.043398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.554 qpair failed and we were unable to recover it.
00:26:45.554 [2024-07-15 17:08:52.043576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.043590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.043757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.043770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.043862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.043875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.044055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.044069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.044171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.044185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 
00:26:45.554 [2024-07-15 17:08:52.044367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.044382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.044489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.044503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.044617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.044631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.044810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.044824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.044923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.044937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 
00:26:45.554 [2024-07-15 17:08:52.045060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.045074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.045262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.045276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.045364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.045378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.045556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.045570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.045672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.045686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 
00:26:45.554 [2024-07-15 17:08:52.045797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.045811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.045919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.045932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.046033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.046044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.046140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.046150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.046248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.046259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 
00:26:45.554 [2024-07-15 17:08:52.046347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.046357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.046509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.046519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.046627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.046637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.046805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.046815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.046877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.046887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 
00:26:45.554 [2024-07-15 17:08:52.047084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.047093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.047161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.047171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.047347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.047358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.047444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.047454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.047561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.047571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 
00:26:45.554 [2024-07-15 17:08:52.047735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.047745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.047837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.047847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.047952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.047962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.048050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.048062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.048171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.048181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 
00:26:45.554 [2024-07-15 17:08:52.048280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.048290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.048392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.048402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.048475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.048485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.048604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.048615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.048699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.048708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 
00:26:45.554 [2024-07-15 17:08:52.048801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.048811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.048957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.048968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.049061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.049071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.049170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.049180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.049254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.049264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 
00:26:45.554 [2024-07-15 17:08:52.049376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.049386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.049473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.049483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.049644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.049654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.049753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.049763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 00:26:45.554 [2024-07-15 17:08:52.049925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.049935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.554 qpair failed and we were unable to recover it. 
00:26:45.554 [2024-07-15 17:08:52.050021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.554 [2024-07-15 17:08:52.050031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.050122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.050132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.050220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.050241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.050336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.050347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.050445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.050455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 
00:26:45.555 [2024-07-15 17:08:52.050548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.050558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.050639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.050649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.050736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.050746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.050857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.050867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.050973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.050983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 
00:26:45.555 [2024-07-15 17:08:52.051068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.051084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.051262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.051276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.051388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.051401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.051515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.051529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.051638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.051652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 
00:26:45.555 [2024-07-15 17:08:52.051827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.051841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.051939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.051953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.052065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.052079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.052182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.052196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.052428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.052442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 
00:26:45.555 [2024-07-15 17:08:52.052540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.052554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.052672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.052686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.052878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.052892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.053003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.053019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.053096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.053109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 
00:26:45.555 [2024-07-15 17:08:52.053201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.053214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.053334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.053348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.053521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.053535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.053626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.053640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.053728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.053742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 
00:26:45.555 [2024-07-15 17:08:52.053911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.053925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.054039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.054053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.054153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.054167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.054384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.054398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.054496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.054510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 
00:26:45.555 [2024-07-15 17:08:52.054606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.054619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.054730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.054744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.054932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.054946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.055037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.055051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.055153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.055167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 
00:26:45.555 [2024-07-15 17:08:52.055258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.055272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.055383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.055397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.055495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.055509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.055745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.055759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.055882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.055896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 
00:26:45.555 [2024-07-15 17:08:52.056004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.056018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.056117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.056130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.056239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.056253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.056377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.056399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.056515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.056528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 
00:26:45.555 [2024-07-15 17:08:52.056626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.056638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.056809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.056819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.056927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.056938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.057036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.057046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.555 qpair failed and we were unable to recover it. 00:26:45.555 [2024-07-15 17:08:52.057143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.555 [2024-07-15 17:08:52.057153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 
00:26:45.556 [2024-07-15 17:08:52.057304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.057314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.057410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.057420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.057512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.057521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.057619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.057629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.057788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.057797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 
00:26:45.556 [2024-07-15 17:08:52.057903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.057913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.058002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.058013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.058112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.058122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.058223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.058237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.058461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.058472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 
00:26:45.556 [2024-07-15 17:08:52.058574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.058584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.058741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.058751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.058845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.058855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.058958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.058968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.059194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.059204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 
00:26:45.556 [2024-07-15 17:08:52.059380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.059390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.059509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.059520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.059620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.059630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.059722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.059733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.059889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.059900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 
00:26:45.556 [2024-07-15 17:08:52.060008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.060018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.060112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.060121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.060218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.060235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.060332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.060342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.060447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.060457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 
00:26:45.556 [2024-07-15 17:08:52.060548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.060558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.060673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.060683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.060859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.060869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.060970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.060980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.061139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.061149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 
00:26:45.556 [2024-07-15 17:08:52.061265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.061275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.061382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.061393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.061557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.061566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.061730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.061740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.061855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.061864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 
00:26:45.556 [2024-07-15 17:08:52.061964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.061975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.062067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.062078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.062213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.062223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.062346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.062356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.062445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.062455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 
00:26:45.556 [2024-07-15 17:08:52.062542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.062553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.062651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.062661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.062757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.062768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.062879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.062889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.062988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.062998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 
00:26:45.556 [2024-07-15 17:08:52.063090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.063100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.063189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.063200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.063295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.063305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.556 qpair failed and we were unable to recover it. 00:26:45.556 [2024-07-15 17:08:52.063418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.556 [2024-07-15 17:08:52.063428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.063533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.063543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 
00:26:45.557 [2024-07-15 17:08:52.063633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.063643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.063763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.063772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.063837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.063846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.064003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.064013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.064124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.064134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 
00:26:45.557 [2024-07-15 17:08:52.064244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.064254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.064361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.064371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.064527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.064536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.064634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.064644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.064811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.064821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 
00:26:45.557 [2024-07-15 17:08:52.064912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.064922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.065099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.065108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.065236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.065247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.065350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.065360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.065467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.065477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 
00:26:45.557 [2024-07-15 17:08:52.065629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.065639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.065735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.065745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.065837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.065847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.066009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.066019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.066138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.066148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 
00:26:45.557 [2024-07-15 17:08:52.066253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.066263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.066356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.066366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.066447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.066457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.066548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.066558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.066635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.066645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 
00:26:45.557 [2024-07-15 17:08:52.066812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.066825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.066922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.066932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.067025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.067035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.067196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.067206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.067307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.067317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 
00:26:45.557 [2024-07-15 17:08:52.067412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.067423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.067530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.067540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.067636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.067646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.067748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.067758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 00:26:45.557 [2024-07-15 17:08:52.067851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.557 [2024-07-15 17:08:52.067861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.557 qpair failed and we were unable to recover it. 
00:26:45.557 [2024-07-15 17:08:52.068071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.557 [2024-07-15 17:08:52.068081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.557 qpair failed and we were unable to recover it.
00:26:45.557 [2024-07-15 17:08:52.068182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.557 [2024-07-15 17:08:52.068192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.557 qpair failed and we were unable to recover it.
00:26:45.557 [2024-07-15 17:08:52.068301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.557 [2024-07-15 17:08:52.068311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.557 qpair failed and we were unable to recover it.
00:26:45.557 [2024-07-15 17:08:52.068409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.557 [2024-07-15 17:08:52.068419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.557 qpair failed and we were unable to recover it.
00:26:45.557 [2024-07-15 17:08:52.068585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.557 [2024-07-15 17:08:52.068595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.557 qpair failed and we were unable to recover it.
00:26:45.557 [2024-07-15 17:08:52.068771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.557 [2024-07-15 17:08:52.068781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.557 qpair failed and we were unable to recover it.
00:26:45.557 [2024-07-15 17:08:52.068964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.557 [2024-07-15 17:08:52.068974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.557 qpair failed and we were unable to recover it.
00:26:45.557 [2024-07-15 17:08:52.069082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.557 [2024-07-15 17:08:52.069092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.557 qpair failed and we were unable to recover it.
00:26:45.557 [2024-07-15 17:08:52.069312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.557 [2024-07-15 17:08:52.069322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.557 qpair failed and we were unable to recover it.
00:26:45.557 [2024-07-15 17:08:52.069481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.557 [2024-07-15 17:08:52.069491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.557 qpair failed and we were unable to recover it.
00:26:45.557 [2024-07-15 17:08:52.069705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.557 [2024-07-15 17:08:52.069715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.069834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.069844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.069931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.069941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.070105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.070115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.070291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.070301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.070410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.070421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.070529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.070539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.070633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.070642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.070798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.070808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.070901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.070911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.071021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.071031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.071138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.071148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.071238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.071248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.071334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.071344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.071511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.071521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.071625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.071634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.071754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.071764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.071857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.071867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.071960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.071971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.072191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.072201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.072363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.072375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.072477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.072487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.072649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.072660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:45.558 [2024-07-15 17:08:52.072830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.072842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.072939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.072950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:26:45.558 [2024-07-15 17:08:52.073119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.073131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.073300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.073312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:45.558 [2024-07-15 17:08:52.073412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.073423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.073526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.073536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:45.558 [2024-07-15 17:08:52.073640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.073652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.073765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.073776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.073880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.073891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:45.558 [2024-07-15 17:08:52.073985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.073997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.074164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.074175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.074304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.074315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.074406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.074416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.074518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.074528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.074647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.074657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.074749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.074759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.074875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.074885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.074974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.074986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.075097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.075108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.075198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.075207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.075305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.075314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.075542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.075553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.075665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.075675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.075792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.075802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.075906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.075917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.076091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.076101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.076207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.076217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.076322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.076332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.076440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.076451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.076549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.076561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.076673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.076683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.076780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.076791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.076972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.076982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.077146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.077157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.077272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.077283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.077444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.077457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.077568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.077581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.077681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.077693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.077794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.077804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.077936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.077946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.078044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.078054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.078160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.558 [2024-07-15 17:08:52.078170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.558 qpair failed and we were unable to recover it.
00:26:45.558 [2024-07-15 17:08:52.078292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.078304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.078438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.078448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.078598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.078608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.078769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.078779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.078895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.078905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.079059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.079068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.079279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.079290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.079407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.079417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.079537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.079547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.079717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.079727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.079826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.079836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.079944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.079954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.080045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.080056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.080153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.080163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.080277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.080288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.080439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.080449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.080552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.080564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.080666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.080676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.080762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.559 [2024-07-15 17:08:52.080773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.559 qpair failed and we were unable to recover it.
00:26:45.559 [2024-07-15 17:08:52.080867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.080878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.080989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.081002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.081110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.081121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.081246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.081257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.081366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.081376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 
00:26:45.559 [2024-07-15 17:08:52.081532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.081542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.081641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.081652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.081753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.081763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.081842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.081852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.081958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.081968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 
00:26:45.559 [2024-07-15 17:08:52.082077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.082087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.082194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.082204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.082303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.082314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.082472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.082482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.082644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.082656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 
00:26:45.559 [2024-07-15 17:08:52.082817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.082827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.082937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.082948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.083047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.083059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.083148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.083160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.083319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.083330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 
00:26:45.559 [2024-07-15 17:08:52.083491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.083502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.083663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.083672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.083781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.083791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.083863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.083873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.083971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.083981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 
00:26:45.559 [2024-07-15 17:08:52.084070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.084080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.084190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.084200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.084357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.084370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.084491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.084501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.084593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.084605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 
00:26:45.559 [2024-07-15 17:08:52.084715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.084726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.084835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.084846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.084945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.084957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.085065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.085076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.085177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.085187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 
00:26:45.559 [2024-07-15 17:08:52.085318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.085329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.085433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.085443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.085560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.085570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.085673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.559 [2024-07-15 17:08:52.085683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.559 qpair failed and we were unable to recover it. 00:26:45.559 [2024-07-15 17:08:52.085772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.085783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 
00:26:45.560 [2024-07-15 17:08:52.085890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.085900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.086014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.086024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.086120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.086130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.086218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.086232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.086314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.086324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 
00:26:45.560 [2024-07-15 17:08:52.086435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.086445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.086548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.086558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.086655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.086665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.086765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.086775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.086905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.086916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 
00:26:45.560 [2024-07-15 17:08:52.087009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.087019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.087111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.087122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.087233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.087246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.087404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.087414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.087524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.087536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 
00:26:45.560 [2024-07-15 17:08:52.087693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.087704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.087791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.087801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.087890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.087900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.087989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.087998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.088093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.088103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 
00:26:45.560 [2024-07-15 17:08:52.088191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.088202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.088296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.088306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.088399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.088410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.088512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.088523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.088622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.088632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 
00:26:45.560 [2024-07-15 17:08:52.088722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.088732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.088824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.088834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.088942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.088952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.089047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.089058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.089156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.089166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 
00:26:45.560 [2024-07-15 17:08:52.089275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.089285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.089445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.089455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.089561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.089571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.089747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.089758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.089870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.089880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 
00:26:45.560 [2024-07-15 17:08:52.089970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.089980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.090069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.090079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.090179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.090189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.090303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.090314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.090418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.090428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 
00:26:45.560 [2024-07-15 17:08:52.090520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.090530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.090629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.090639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.090727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.090737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.090828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.090838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.090932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.090942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 
00:26:45.560 [2024-07-15 17:08:52.091044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.091054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.091167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.091177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.091284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.091294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.091388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.091399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 00:26:45.560 [2024-07-15 17:08:52.091506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.560 [2024-07-15 17:08:52.091516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.560 qpair failed and we were unable to recover it. 
00:26:45.562 [... the same error triplet — connect() failed, errno = 111 (ECONNREFUSED) from posix.c:1038:posix_sock_create, the sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." — repeats roughly 110 more times, from [2024-07-15 17:08:52.091630] through [2024-07-15 17:08:52.105260] ...]
00:26:45.562 [2024-07-15 17:08:52.105365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.105375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.105470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.105480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.105570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.105580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.105680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.105690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.105811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.105822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 
00:26:45.562 [2024-07-15 17:08:52.105922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.105933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.106041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.106051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.106206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.106216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.106319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.106329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.106433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.106443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 
00:26:45.562 [2024-07-15 17:08:52.106600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.106610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.106701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.106712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.106820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.106831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.106943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.106953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.107047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.107059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 
00:26:45.562 [2024-07-15 17:08:52.107162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.107172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.107280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.107290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.107392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.107402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.107469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.107479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 00:26:45.562 [2024-07-15 17:08:52.107571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.562 [2024-07-15 17:08:52.107581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.562 qpair failed and we were unable to recover it. 
00:26:45.562 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:45.562 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:45.562 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:45.562 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:45.564 [2024-07-15 17:08:52.116829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.116839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.116958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.116968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.117072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.117082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.117190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.117201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.117306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.117316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-07-15 17:08:52.117417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.117428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.117536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.117546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.117646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.117656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.117779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.117790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.117893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.117903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-07-15 17:08:52.118001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.118010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.118106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.118117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.118221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.118237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.118314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.118324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.118429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.118439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-07-15 17:08:52.118534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.118544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.118707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.118717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.118829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.118839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.118929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.118939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.119051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.119061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-07-15 17:08:52.119156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.119166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.119371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.119382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.119541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.119551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.119655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.119665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.119823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.119835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-07-15 17:08:52.119932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.119942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.120033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.120043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.120159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.120169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.120277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.120288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.120455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.120465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-07-15 17:08:52.120581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.120591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.120774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.120784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.120882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.120892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.120989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.120999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.121092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.121102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-07-15 17:08:52.121192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.121202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.121367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.121378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.121471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.121481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.121643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.121654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.121826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.121836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-07-15 17:08:52.121949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.121959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.122063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.122073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.122248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.122259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.122365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.122376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.122514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.122525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-07-15 17:08:52.122703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.122714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.122816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.122826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.123016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.123027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.123165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.123176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.123268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.123280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-07-15 17:08:52.123385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.123395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.123493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.123505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.123599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.123610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.123708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.123718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.123826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.123837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-07-15 17:08:52.123937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.123948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-07-15 17:08:52.124054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-07-15 17:08:52.124064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.124201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.124211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.124324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.124335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.124510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.124521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.124615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.124626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.124823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.124836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.124939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.124949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.125041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.125052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.125163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.125176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.125282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.125293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.125397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.125409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.125516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.125528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.125679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.125690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.125798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.125809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.125908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.125918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.126025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.126036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.126129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.126139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.126244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.126254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.126370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.126381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.126474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.126485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.126578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.126588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.126681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.126691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.126805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.126815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.126912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.126922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.127029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.127040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.127126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.127136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.127236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.127247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.127346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.127356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.127445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.127455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.127548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.127558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.127650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.127660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.127754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.127764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.127876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.127886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.127985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.127994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.128092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.128102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.128193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.128203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.128359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.128369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.128469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.128478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.128568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.128578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.128671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.128681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.128749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.128759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.128942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.128952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 Malloc0 00:26:45.565 [2024-07-15 17:08:52.129109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.129119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.129206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.129216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.129320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.129330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.129427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.129436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.129521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.129531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.129689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.129700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.565 [2024-07-15 17:08:52.129796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.129806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.129909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.129919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.130082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:45.565 [2024-07-15 17:08:52.130092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.130195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.130205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.130319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.130330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.565 [2024-07-15 17:08:52.130450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.130460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.130626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.130637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:45.565 [2024-07-15 17:08:52.130727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.130737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.130898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.130909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.131004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.131015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.131124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.131134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.131234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.131245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.131357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.131367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.131465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.131475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.131538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.131548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-07-15 17:08:52.131639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.131649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.131758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.131768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.131875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-07-15 17:08:52.131885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-07-15 17:08:52.131976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.131985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.132142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.132152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.132313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.132323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.132415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.132425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.132507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.132519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.132617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.132626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.132806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.132816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.132906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.132918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.133077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.133087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.133194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.133204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.133305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.133315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.133408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.133418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.133575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.133585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.133746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.133756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.133830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.133840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.133998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.134009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.134113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.134123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.134255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.134265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.134354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.134365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.134465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.134475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.134566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.134576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.134679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.134689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.134783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.134792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.134882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.134892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.134986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.134996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.135099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.135109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.135304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.135314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.135487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.135498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.135658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.135668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.135763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.135773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.135867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.135877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.135972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.135982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.136096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.136106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.136194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.136204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.136220] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.566 [2024-07-15 17:08:52.136313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.136323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.136422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.136431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.136531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.136541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.136631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.136640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.136739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.136749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.136831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.136840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.136930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.136940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.137032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.137042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.137197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.137207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.137373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.137383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.137476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.137486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.137621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.137631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.137811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.137822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.137936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.137948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.138047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.138057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.138175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.138185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.138290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.138300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.138385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.138394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.138492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.138501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.138596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.138605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.138708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.138717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.138810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.138820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 00:26:45.566 [2024-07-15 17:08:52.138947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.566 [2024-07-15 17:08:52.138957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420 00:26:45.566 qpair failed and we were unable to recover it. 
00:26:45.566 [2024-07-15 17:08:52.139059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.566 [2024-07-15 17:08:52.139069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.566 qpair failed and we were unable to recover it.
00:26:45.566 [2024-07-15 17:08:52.139162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.566 [2024-07-15 17:08:52.139172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.566 qpair failed and we were unable to recover it.
00:26:45.566 [2024-07-15 17:08:52.139286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.566 [2024-07-15 17:08:52.139296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.566 qpair failed and we were unable to recover it.
00:26:45.566 [2024-07-15 17:08:52.139397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.566 [2024-07-15 17:08:52.139407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.566 qpair failed and we were unable to recover it.
00:26:45.566 [2024-07-15 17:08:52.139504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.139514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.139651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.139661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.139779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.139789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.139883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.139893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.139998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.140008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.140100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.140109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.140217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.140232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.140370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.140380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.140467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.140477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.140575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.140585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.140672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.140681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.140781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.140791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.140882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.140892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.141055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.141066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.141156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.141167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.141348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.141359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.141453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.141463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.141555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.141565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.141667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.141677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.141835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.141845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.141934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.141944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.142057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.142067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.142155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.142165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.142276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.142287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.142380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.142392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.142482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.142493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.142600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.142612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.142712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.142722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.142825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.142836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.142937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.142947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.143041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.143050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.143152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.143162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.143258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.143268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.143363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.143373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.143464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.143475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.143570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.143580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.143759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.143769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.143884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.143893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.143977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.143987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.144059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.144069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.144171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.144181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.144299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.144309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.144416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.144426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.144520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.144530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.144716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.144726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:45.567 [2024-07-15 17:08:52.144936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.144950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.145072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.145082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.145201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.145211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:45.567 [2024-07-15 17:08:52.145316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.145330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.145429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.145440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.145591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.145602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:45.567 [2024-07-15 17:08:52.145717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.145729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:45.567 [2024-07-15 17:08:52.145911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.145922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.146037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.146047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.146160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.146170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.146270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.146280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.146378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.146388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.146488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.146498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.146591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.146601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.146721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.146732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.146890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.146900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.146990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.147000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.147103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.147113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.147193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.147202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.147365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.147377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.147560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.567 [2024-07-15 17:08:52.147571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.567 qpair failed and we were unable to recover it.
00:26:45.567 [2024-07-15 17:08:52.147736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.147746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.147914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.147925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.148030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.148041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.148149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.148159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.148270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.148280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.148389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.148400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.148493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.148503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.148602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.148612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.148711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.148721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.148880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.148891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.148999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.149009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.149115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.149126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.149277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.149287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.149455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.149465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.149569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.149579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.149686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.149696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.149787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.149797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.149899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.149909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.150004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.150015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.150111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.150122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.150286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.150297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.150397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.150407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.150508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.150518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.150686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.150696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.150815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.150826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.150919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.150930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.151087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.151098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.151255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.151266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.151372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.151382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.151499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.151508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.151659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.151669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.151766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.151775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.151881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.151891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.151993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.152003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.152105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.152115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.152217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.152231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.152387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.152396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.152490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.152500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.152595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.152606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.152715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.152725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.152826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.152836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.152935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.152946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.153103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.153113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.153240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.153250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.568 [2024-07-15 17:08:52.153342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.568 [2024-07-15 17:08:52.153352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.568 qpair failed and we were unable to recover it.
00:26:45.829 [2024-07-15 17:08:52.153441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.829 [2024-07-15 17:08:52.153452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.829 qpair failed and we were unable to recover it.
00:26:45.829 [2024-07-15 17:08:52.153552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.829 [2024-07-15 17:08:52.153562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.829 qpair failed and we were unable to recover it.
00:26:45.829 [2024-07-15 17:08:52.153656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.829 [2024-07-15 17:08:52.153667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.829 qpair failed and we were unable to recover it.
00:26:45.829 [2024-07-15 17:08:52.153826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.829 [2024-07-15 17:08:52.153837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.829 qpair failed and we were unable to recover it.
00:26:45.829 [2024-07-15 17:08:52.153939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.829 [2024-07-15 17:08:52.153949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.829 qpair failed and we were unable to recover it.
00:26:45.829 [2024-07-15 17:08:52.154093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.829 [2024-07-15 17:08:52.154104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.829 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.154247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.154259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.154367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.154377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.154475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.154485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.154595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.154605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.155409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.155434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.155561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.155573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.155676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.155687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.155806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.155816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.155993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.156003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.156169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.156179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.156294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.156306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.156447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.156457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.156576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.156586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.156686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.156696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.156859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.156873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.156897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0000 (9): Bad file descriptor
00:26:45.830 [2024-07-15 17:08:52.157008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.157032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.157182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:45.830 [2024-07-15 17:08:52.157217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.157361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.157377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d54000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.157494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.157506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:45.830 [2024-07-15 17:08:52.157620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.157631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.157733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.157743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:45.830 [2024-07-15 17:08:52.157845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.157855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.157956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.157966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.158054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.158064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.158161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.158172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.158271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.158283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.158443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.158453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.158599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.158609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.158699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.158709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.158845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.158856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.158961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.158971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.159066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.159076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.159178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.159188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.159291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.159302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.159397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.159407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.159504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.159514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.159659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.159670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.159780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.159791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.159909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.159920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.160030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.160041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.160185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.160195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.160356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.160367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.160470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.160481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.160582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.160593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.160712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.160722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.160829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.160840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.160947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.160957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.161052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.161063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.161150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.161161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.161252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.161262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.161353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.161363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.161458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.161468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.161584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.161603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.161777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.161791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.161895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.161910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.162059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.162075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.162175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.162189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.162353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.162368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.162478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.162492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.162673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.162687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.162791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.162806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.162967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.162981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.163093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.163108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.163253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.163269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.830 qpair failed and we were unable to recover it.
00:26:45.830 [2024-07-15 17:08:52.163355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.830 [2024-07-15 17:08:52.163369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.163513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.163527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.163606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.163621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.163754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.163768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.163948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.831 [2024-07-15 17:08:52.163962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.831 qpair failed and we were unable to recover it. 00:26:45.831 [2024-07-15 17:08:52.164128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.831 [2024-07-15 17:08:52.164142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.831 qpair failed and we were unable to recover it. 00:26:45.831 [2024-07-15 17:08:52.164250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.831 [2024-07-15 17:08:52.164265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.831 qpair failed and we were unable to recover it. 00:26:45.831 [2024-07-15 17:08:52.164359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.831 [2024-07-15 17:08:52.164373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.831 qpair failed and we were unable to recover it. 00:26:45.831 [2024-07-15 17:08:52.164503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.831 [2024-07-15 17:08:52.164517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd91ed0 with addr=10.0.0.2, port=4420 00:26:45.831 qpair failed and we were unable to recover it. 
00:26:45.831 [2024-07-15 17:08:52.164695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.164708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.164807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.164818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:45.831 [2024-07-15 17:08:52.164977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.164987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.165146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.165157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.165262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.165273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:45.831 [2024-07-15 17:08:52.165443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.165454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.165554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.165563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:45.831 [2024-07-15 17:08:52.165688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.165699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.165808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.165818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.165918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.165929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:45.831 [2024-07-15 17:08:52.166021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.166032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.166135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.166145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.166309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.166320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.166423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.166434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.166536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.166546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.166651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.166661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.166770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.166781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.166880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.166890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.166986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.166997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.167091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.167100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.167258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.167268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.167360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.167371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.167461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.167471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.167568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.167578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.167675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.167685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.167784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.167794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.167884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.167894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.168050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.168060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.168148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.168159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.168262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.168272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.168371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.831 [2024-07-15 17:08:52.168382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d4c000b90 with addr=10.0.0.2, port=4420
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.168456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:45.831 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:45.831 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:45.831 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:45.831 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:45.831 [2024-07-15 17:08:52.176764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.831 [2024-07-15 17:08:52.176846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.831 [2024-07-15 17:08:52.176864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.831 [2024-07-15 17:08:52.176872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.831 [2024-07-15 17:08:52.176878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.831 [2024-07-15 17:08:52.176897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:45.831 17:08:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 238355
00:26:45.831 [2024-07-15 17:08:52.186730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.831 [2024-07-15 17:08:52.186798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.831 [2024-07-15 17:08:52.186814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.831 [2024-07-15 17:08:52.186821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.831 [2024-07-15 17:08:52.186827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.831 [2024-07-15 17:08:52.186842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.196783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.831 [2024-07-15 17:08:52.196848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.831 [2024-07-15 17:08:52.196865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.831 [2024-07-15 17:08:52.196872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.831 [2024-07-15 17:08:52.196879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.831 [2024-07-15 17:08:52.196894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.206724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.831 [2024-07-15 17:08:52.206792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.831 [2024-07-15 17:08:52.206807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.831 [2024-07-15 17:08:52.206816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.831 [2024-07-15 17:08:52.206822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.831 [2024-07-15 17:08:52.206837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.216738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.831 [2024-07-15 17:08:52.216808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.831 [2024-07-15 17:08:52.216823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.831 [2024-07-15 17:08:52.216830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.831 [2024-07-15 17:08:52.216837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.831 [2024-07-15 17:08:52.216852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.226794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.831 [2024-07-15 17:08:52.226853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.831 [2024-07-15 17:08:52.226868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.831 [2024-07-15 17:08:52.226875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.831 [2024-07-15 17:08:52.226880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.831 [2024-07-15 17:08:52.226894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.236765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.831 [2024-07-15 17:08:52.236839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.831 [2024-07-15 17:08:52.236853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.831 [2024-07-15 17:08:52.236860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.831 [2024-07-15 17:08:52.236866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.831 [2024-07-15 17:08:52.236880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.246863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.831 [2024-07-15 17:08:52.246930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.831 [2024-07-15 17:08:52.246945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.831 [2024-07-15 17:08:52.246952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.831 [2024-07-15 17:08:52.246958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.831 [2024-07-15 17:08:52.246973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.831 qpair failed and we were unable to recover it.
00:26:45.831 [2024-07-15 17:08:52.256856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.256921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.256937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.256943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.256949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.256963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.266933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.266990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.267004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.267011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.267016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.267030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.276919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.276994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.277008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.277015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.277021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.277035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.286943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.287006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.287020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.287027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.287033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.287048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.296988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.297053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.297070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.297079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.297086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.297100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.307069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.307182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.307199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.307206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.307212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.307230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.317065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.317128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.317143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.317150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.317156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.317170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.327092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.327171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.327186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.327192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.327198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.327213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.337114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.337180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.337194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.337200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.337206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.337223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.347144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.347203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.347218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.347228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.347234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.347248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.357171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.357231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.357246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.357252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.357258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.357272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.367187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.832 [2024-07-15 17:08:52.367257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.832 [2024-07-15 17:08:52.367272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.832 [2024-07-15 17:08:52.367279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.832 [2024-07-15 17:08:52.367284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:45.832 [2024-07-15 17:08:52.367299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:45.832 qpair failed and we were unable to recover it.
00:26:45.832 [2024-07-15 17:08:52.377235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.832 [2024-07-15 17:08:52.377299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.832 [2024-07-15 17:08:52.377313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.832 [2024-07-15 17:08:52.377320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.832 [2024-07-15 17:08:52.377326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.832 [2024-07-15 17:08:52.377340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.832 qpair failed and we were unable to recover it. 
00:26:45.832 [2024-07-15 17:08:52.387300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.832 [2024-07-15 17:08:52.387366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.832 [2024-07-15 17:08:52.387383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.832 [2024-07-15 17:08:52.387390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.832 [2024-07-15 17:08:52.387395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.832 [2024-07-15 17:08:52.387410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.832 qpair failed and we were unable to recover it. 
00:26:45.832 [2024-07-15 17:08:52.397404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.832 [2024-07-15 17:08:52.397480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.832 [2024-07-15 17:08:52.397494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.832 [2024-07-15 17:08:52.397500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.832 [2024-07-15 17:08:52.397506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.832 [2024-07-15 17:08:52.397520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.832 qpair failed and we were unable to recover it. 
00:26:45.832 [2024-07-15 17:08:52.407367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.832 [2024-07-15 17:08:52.407434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.832 [2024-07-15 17:08:52.407448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.832 [2024-07-15 17:08:52.407454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.832 [2024-07-15 17:08:52.407460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.832 [2024-07-15 17:08:52.407474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.832 qpair failed and we were unable to recover it. 
00:26:45.832 [2024-07-15 17:08:52.417406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.832 [2024-07-15 17:08:52.417469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.832 [2024-07-15 17:08:52.417482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.832 [2024-07-15 17:08:52.417489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.832 [2024-07-15 17:08:52.417495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.832 [2024-07-15 17:08:52.417508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.832 qpair failed and we were unable to recover it. 
00:26:45.832 [2024-07-15 17:08:52.427470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.832 [2024-07-15 17:08:52.427573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.832 [2024-07-15 17:08:52.427587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.832 [2024-07-15 17:08:52.427593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.832 [2024-07-15 17:08:52.427599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.832 [2024-07-15 17:08:52.427617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.832 qpair failed and we were unable to recover it. 
00:26:45.832 [2024-07-15 17:08:52.437452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.832 [2024-07-15 17:08:52.437510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.832 [2024-07-15 17:08:52.437524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.832 [2024-07-15 17:08:52.437530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.832 [2024-07-15 17:08:52.437536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.832 [2024-07-15 17:08:52.437550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.832 qpair failed and we were unable to recover it. 
00:26:45.832 [2024-07-15 17:08:52.447445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.832 [2024-07-15 17:08:52.447508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.832 [2024-07-15 17:08:52.447523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.832 [2024-07-15 17:08:52.447530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.832 [2024-07-15 17:08:52.447536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.832 [2024-07-15 17:08:52.447550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.832 qpair failed and we were unable to recover it. 
00:26:45.832 [2024-07-15 17:08:52.457526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.832 [2024-07-15 17:08:52.457590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.832 [2024-07-15 17:08:52.457605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.832 [2024-07-15 17:08:52.457612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.832 [2024-07-15 17:08:52.457617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.832 [2024-07-15 17:08:52.457631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.832 qpair failed and we were unable to recover it. 
00:26:45.832 [2024-07-15 17:08:52.467582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.832 [2024-07-15 17:08:52.467642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.833 [2024-07-15 17:08:52.467657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.833 [2024-07-15 17:08:52.467663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.833 [2024-07-15 17:08:52.467669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.833 [2024-07-15 17:08:52.467684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.833 qpair failed and we were unable to recover it. 
00:26:45.833 [2024-07-15 17:08:52.477548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.833 [2024-07-15 17:08:52.477617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.833 [2024-07-15 17:08:52.477632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.833 [2024-07-15 17:08:52.477638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.833 [2024-07-15 17:08:52.477644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.833 [2024-07-15 17:08:52.477659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.833 qpair failed and we were unable to recover it. 
00:26:45.833 [2024-07-15 17:08:52.487560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.833 [2024-07-15 17:08:52.487625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.833 [2024-07-15 17:08:52.487640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.833 [2024-07-15 17:08:52.487646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.833 [2024-07-15 17:08:52.487652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:45.833 [2024-07-15 17:08:52.487666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.833 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.497634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.497738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.497753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.497760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.497766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.497780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.507643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.507709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.507723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.507730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.507736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.507750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.517643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.517709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.517723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.517730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.517739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.517753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.527645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.527708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.527722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.527729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.527734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.527748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.537732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.537795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.537809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.537815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.537821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.537835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.547734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.547791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.547805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.547812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.547817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.547831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.557750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.557808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.557822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.557828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.557834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.557847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.567789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.567854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.567869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.567875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.567881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.567895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.577831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.577897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.577911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.577917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.577923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.577937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.587843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.587899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.587913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.587920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.587925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.587940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.597875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.597937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.597951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.597957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.597963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.597977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.607913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.607975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.607990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.607999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.608004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.608019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.617873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.617939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.617953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.617960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.617965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.092 [2024-07-15 17:08:52.617980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.092 qpair failed and we were unable to recover it. 
00:26:46.092 [2024-07-15 17:08:52.627965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.092 [2024-07-15 17:08:52.628028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.092 [2024-07-15 17:08:52.628042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.092 [2024-07-15 17:08:52.628048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.092 [2024-07-15 17:08:52.628054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.093 [2024-07-15 17:08:52.628068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.093 qpair failed and we were unable to recover it. 
00:26:46.093 [2024-07-15 17:08:52.638020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.093 [2024-07-15 17:08:52.638081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.093 [2024-07-15 17:08:52.638095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.093 [2024-07-15 17:08:52.638101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.093 [2024-07-15 17:08:52.638107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.093 [2024-07-15 17:08:52.638121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.093 qpair failed and we were unable to recover it. 
00:26:46.093 [2024-07-15 17:08:52.648034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.093 [2024-07-15 17:08:52.648096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.093 [2024-07-15 17:08:52.648111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.093 [2024-07-15 17:08:52.648117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.093 [2024-07-15 17:08:52.648123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.093 [2024-07-15 17:08:52.648136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.093 qpair failed and we were unable to recover it. 
00:26:46.093 [2024-07-15 17:08:52.658062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.093 [2024-07-15 17:08:52.658125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.093 [2024-07-15 17:08:52.658139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.093 [2024-07-15 17:08:52.658146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.093 [2024-07-15 17:08:52.658151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.093 [2024-07-15 17:08:52.658165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.093 qpair failed and we were unable to recover it.
00:26:46.093 [2024-07-15 17:08:52.668141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.093 [2024-07-15 17:08:52.668205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.093 [2024-07-15 17:08:52.668219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.093 [2024-07-15 17:08:52.668229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.093 [2024-07-15 17:08:52.668235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.093 [2024-07-15 17:08:52.668250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.093 qpair failed and we were unable to recover it.
00:26:46.093 [2024-07-15 17:08:52.678125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.093 [2024-07-15 17:08:52.678183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.093 [2024-07-15 17:08:52.678198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.093 [2024-07-15 17:08:52.678204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.093 [2024-07-15 17:08:52.678211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.093 [2024-07-15 17:08:52.678228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.093 qpair failed and we were unable to recover it.
00:26:46.093 [2024-07-15 17:08:52.688163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.093 [2024-07-15 17:08:52.688246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.093 [2024-07-15 17:08:52.688260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.093 [2024-07-15 17:08:52.688267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.093 [2024-07-15 17:08:52.688273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.093 [2024-07-15 17:08:52.688288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.093 qpair failed and we were unable to recover it.
00:26:46.093 [2024-07-15 17:08:52.698177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.093 [2024-07-15 17:08:52.698249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.093 [2024-07-15 17:08:52.698264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.093 [2024-07-15 17:08:52.698274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.093 [2024-07-15 17:08:52.698279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.093 [2024-07-15 17:08:52.698294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.093 qpair failed and we were unable to recover it.
00:26:46.093 [2024-07-15 17:08:52.708206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.093 [2024-07-15 17:08:52.708269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.093 [2024-07-15 17:08:52.708283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.093 [2024-07-15 17:08:52.708290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.093 [2024-07-15 17:08:52.708296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.093 [2024-07-15 17:08:52.708310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.093 qpair failed and we were unable to recover it.
00:26:46.093 [2024-07-15 17:08:52.718240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.093 [2024-07-15 17:08:52.718301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.093 [2024-07-15 17:08:52.718315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.093 [2024-07-15 17:08:52.718322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.093 [2024-07-15 17:08:52.718328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.093 [2024-07-15 17:08:52.718343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.093 qpair failed and we were unable to recover it.
00:26:46.093 [2024-07-15 17:08:52.728277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.093 [2024-07-15 17:08:52.728343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.093 [2024-07-15 17:08:52.728358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.093 [2024-07-15 17:08:52.728364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.093 [2024-07-15 17:08:52.728370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.093 [2024-07-15 17:08:52.728384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.093 qpair failed and we were unable to recover it.
00:26:46.093 [2024-07-15 17:08:52.738338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.093 [2024-07-15 17:08:52.738404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.093 [2024-07-15 17:08:52.738419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.093 [2024-07-15 17:08:52.738425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.093 [2024-07-15 17:08:52.738431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.093 [2024-07-15 17:08:52.738444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.093 qpair failed and we were unable to recover it.
00:26:46.093 [2024-07-15 17:08:52.748336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.093 [2024-07-15 17:08:52.748399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.093 [2024-07-15 17:08:52.748414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.093 [2024-07-15 17:08:52.748420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.093 [2024-07-15 17:08:52.748426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.093 [2024-07-15 17:08:52.748440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.093 qpair failed and we were unable to recover it.
00:26:46.093 [2024-07-15 17:08:52.758351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.093 [2024-07-15 17:08:52.758412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.093 [2024-07-15 17:08:52.758426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.093 [2024-07-15 17:08:52.758432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.093 [2024-07-15 17:08:52.758437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.093 [2024-07-15 17:08:52.758452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.093 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.768405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.768468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.768482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.768489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.768495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.768510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.778417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.778481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.778497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.778504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.778509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.778525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.788449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.788509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.788527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.788534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.788539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.788553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.798510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.798623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.798638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.798645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.798651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.798665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.808457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.808519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.808534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.808540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.808546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.808560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.818534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.818595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.818610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.818617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.818622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.818636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.828574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.828630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.828644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.828650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.828656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.828674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.838599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.838659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.838674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.838680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.838686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.838700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.848622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.848683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.848698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.848704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.848710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.848725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.858724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.858789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.858803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.858810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.858816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.858830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.868698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.868759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.868773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.868779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.868785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.868799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.878730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.878790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.878807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.878814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.878819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.354 [2024-07-15 17:08:52.878833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-07-15 17:08:52.888746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.354 [2024-07-15 17:08:52.888823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.354 [2024-07-15 17:08:52.888837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.354 [2024-07-15 17:08:52.888844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.354 [2024-07-15 17:08:52.888850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.888863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:52.898839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:52.898907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:52.898922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:52.898928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:52.898934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.898948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:52.908803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:52.908865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:52.908879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:52.908885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:52.908891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.908905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:52.918854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:52.918911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:52.918925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:52.918932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:52.918940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.918955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:52.928863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:52.928925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:52.928939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:52.928946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:52.928951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.928965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:52.938934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:52.938996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:52.939011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:52.939017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:52.939023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.939037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:52.948914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:52.948972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:52.948986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:52.948993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:52.948999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.949013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:52.958940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:52.959001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:52.959015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:52.959021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:52.959027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.959040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:52.968969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:52.969057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:52.969071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:52.969077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:52.969083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.969097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:52.978974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:52.979063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:52.979077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:52.979084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:52.979090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.979104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:52.988994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:52.989063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:52.989078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:52.989084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:52.989090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.989103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:52.999052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:52.999115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:52.999129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:52.999136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:52.999142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:52.999156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:53.009085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.355 [2024-07-15 17:08:53.009148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.355 [2024-07-15 17:08:53.009162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.355 [2024-07-15 17:08:53.009172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.355 [2024-07-15 17:08:53.009178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:46.355 [2024-07-15 17:08:53.009192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-07-15 17:08:53.019127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.355 [2024-07-15 17:08:53.019194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.355 [2024-07-15 17:08:53.019209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.355 [2024-07-15 17:08:53.019215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.355 [2024-07-15 17:08:53.019221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.355 [2024-07-15 17:08:53.019240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.616 [2024-07-15 17:08:53.029133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.616 [2024-07-15 17:08:53.029199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.616 [2024-07-15 17:08:53.029214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.616 [2024-07-15 17:08:53.029220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.616 [2024-07-15 17:08:53.029230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.616 [2024-07-15 17:08:53.029245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.616 qpair failed and we were unable to recover it. 
00:26:46.616 [2024-07-15 17:08:53.039208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.616 [2024-07-15 17:08:53.039270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.616 [2024-07-15 17:08:53.039285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.616 [2024-07-15 17:08:53.039291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.616 [2024-07-15 17:08:53.039297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.616 [2024-07-15 17:08:53.039312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.616 qpair failed and we were unable to recover it. 
00:26:46.616 [2024-07-15 17:08:53.049192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.616 [2024-07-15 17:08:53.049257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.616 [2024-07-15 17:08:53.049271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.616 [2024-07-15 17:08:53.049278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.616 [2024-07-15 17:08:53.049283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.616 [2024-07-15 17:08:53.049297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.616 qpair failed and we were unable to recover it. 
00:26:46.616 [2024-07-15 17:08:53.059235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.616 [2024-07-15 17:08:53.059318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.616 [2024-07-15 17:08:53.059332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.616 [2024-07-15 17:08:53.059338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.616 [2024-07-15 17:08:53.059344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.616 [2024-07-15 17:08:53.059358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.616 qpair failed and we were unable to recover it. 
00:26:46.616 [2024-07-15 17:08:53.069231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.616 [2024-07-15 17:08:53.069291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.616 [2024-07-15 17:08:53.069305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.616 [2024-07-15 17:08:53.069311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.616 [2024-07-15 17:08:53.069317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.616 [2024-07-15 17:08:53.069331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.616 qpair failed and we were unable to recover it. 
00:26:46.616 [2024-07-15 17:08:53.079291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.616 [2024-07-15 17:08:53.079353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.616 [2024-07-15 17:08:53.079367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.616 [2024-07-15 17:08:53.079374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.616 [2024-07-15 17:08:53.079379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.616 [2024-07-15 17:08:53.079393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.616 qpair failed and we were unable to recover it. 
00:26:46.616 [2024-07-15 17:08:53.089324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.616 [2024-07-15 17:08:53.089401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.616 [2024-07-15 17:08:53.089416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.616 [2024-07-15 17:08:53.089422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.616 [2024-07-15 17:08:53.089427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.616 [2024-07-15 17:08:53.089441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.616 qpair failed and we were unable to recover it. 
00:26:46.616 [2024-07-15 17:08:53.099342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.616 [2024-07-15 17:08:53.099406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.616 [2024-07-15 17:08:53.099420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.616 [2024-07-15 17:08:53.099429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.616 [2024-07-15 17:08:53.099436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.616 [2024-07-15 17:08:53.099450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.616 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.109383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.109446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.109461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.109467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.109473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.109487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.119399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.119460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.119475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.119481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.119487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.119501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.129442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.129506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.129520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.129527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.129532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.129546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.139534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.139596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.139611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.139617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.139623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.139637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.149488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.149552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.149567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.149573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.149579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.149593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.159512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.159574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.159588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.159595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.159601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.159615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.169553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.169615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.169629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.169636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.169641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.169656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.179570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.179637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.179652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.179660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.179666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.179680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.189618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.189685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.189702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.189708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.189714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.189728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.199686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.199750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.199766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.199772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.199778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.199792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.209667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.209732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.209746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.209753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.209758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.209773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.219702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.219761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.617 [2024-07-15 17:08:53.219777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.617 [2024-07-15 17:08:53.219784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.617 [2024-07-15 17:08:53.219790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.617 [2024-07-15 17:08:53.219803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.617 qpair failed and we were unable to recover it. 
00:26:46.617 [2024-07-15 17:08:53.229697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.617 [2024-07-15 17:08:53.229760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.618 [2024-07-15 17:08:53.229775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.618 [2024-07-15 17:08:53.229782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.618 [2024-07-15 17:08:53.229788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.618 [2024-07-15 17:08:53.229805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.618 qpair failed and we were unable to recover it. 
00:26:46.618 [2024-07-15 17:08:53.239770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.618 [2024-07-15 17:08:53.239859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.618 [2024-07-15 17:08:53.239873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.618 [2024-07-15 17:08:53.239880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.618 [2024-07-15 17:08:53.239886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.618 [2024-07-15 17:08:53.239901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.618 qpair failed and we were unable to recover it. 
00:26:46.618 [2024-07-15 17:08:53.249842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.618 [2024-07-15 17:08:53.249904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.618 [2024-07-15 17:08:53.249919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.618 [2024-07-15 17:08:53.249926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.618 [2024-07-15 17:08:53.249932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.618 [2024-07-15 17:08:53.249948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.618 qpair failed and we were unable to recover it. 
00:26:46.618 [2024-07-15 17:08:53.259827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.618 [2024-07-15 17:08:53.259891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.618 [2024-07-15 17:08:53.259906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.618 [2024-07-15 17:08:53.259913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.618 [2024-07-15 17:08:53.259919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.618 [2024-07-15 17:08:53.259933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.618 qpair failed and we were unable to recover it. 
00:26:46.618 [2024-07-15 17:08:53.269846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.618 [2024-07-15 17:08:53.269908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.618 [2024-07-15 17:08:53.269923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.618 [2024-07-15 17:08:53.269930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.618 [2024-07-15 17:08:53.269935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.618 [2024-07-15 17:08:53.269949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.618 qpair failed and we were unable to recover it. 
00:26:46.618 [2024-07-15 17:08:53.279808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.618 [2024-07-15 17:08:53.279871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.618 [2024-07-15 17:08:53.279891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.618 [2024-07-15 17:08:53.279897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.618 [2024-07-15 17:08:53.279903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.618 [2024-07-15 17:08:53.279917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.618 qpair failed and we were unable to recover it. 
00:26:46.877 [2024-07-15 17:08:53.289885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.289987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.290001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.290008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.290014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.290028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.299922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.299983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.299998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.300004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.300010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.300024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.309957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.310014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.310028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.310035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.310041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.310055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.319908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.319972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.319989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.319996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.320006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.320020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.330036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.330119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.330133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.330140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.330145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.330159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.340052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.340116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.340131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.340138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.340144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.340157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.350088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.350163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.350177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.350184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.350189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.350204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.360117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.360179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.360193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.360200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.360205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.360219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.370149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.370243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.370258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.370264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.370270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.370284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.380192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.380256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.380271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.380277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.380283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.380297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.390208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.390288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.390303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.390309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.390315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.390329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.400159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.400218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.400237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.400243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.400249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.400263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.410263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.410329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.410343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.410350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.410358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.410373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.420276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.878 [2024-07-15 17:08:53.420346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.878 [2024-07-15 17:08:53.420360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.878 [2024-07-15 17:08:53.420367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.878 [2024-07-15 17:08:53.420372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.878 [2024-07-15 17:08:53.420387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.878 qpair failed and we were unable to recover it. 
00:26:46.878 [2024-07-15 17:08:53.430355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.430413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.430427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.430434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.430439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.430454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:46.879 [2024-07-15 17:08:53.440337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.440399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.440413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.440419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.440425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.440439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:46.879 [2024-07-15 17:08:53.450424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.450487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.450502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.450509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.450514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.450528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:46.879 [2024-07-15 17:08:53.460388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.460451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.460466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.460472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.460478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.460492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:46.879 [2024-07-15 17:08:53.470429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.470493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.470508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.470515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.470520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.470535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:46.879 [2024-07-15 17:08:53.480473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.480539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.480554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.480561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.480566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.480580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:46.879 [2024-07-15 17:08:53.490459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.490518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.490533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.490540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.490545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.490560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:46.879 [2024-07-15 17:08:53.500448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.500512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.500527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.500536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.500542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.500556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:46.879 [2024-07-15 17:08:53.510460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.510520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.510534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.510541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.510546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.510560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:46.879 [2024-07-15 17:08:53.520536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.520596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.520611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.520617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.520623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.520638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:46.879 [2024-07-15 17:08:53.530579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.530643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.530658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.530664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.530670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.530684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:46.879 [2024-07-15 17:08:53.540548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.879 [2024-07-15 17:08:53.540608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.879 [2024-07-15 17:08:53.540623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.879 [2024-07-15 17:08:53.540629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.879 [2024-07-15 17:08:53.540635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:46.879 [2024-07-15 17:08:53.540649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.879 qpair failed and we were unable to recover it. 
00:26:47.139 [2024-07-15 17:08:53.550684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.139 [2024-07-15 17:08:53.550748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.139 [2024-07-15 17:08:53.550762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.139 [2024-07-15 17:08:53.550768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.139 [2024-07-15 17:08:53.550774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.139 [2024-07-15 17:08:53.550789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.139 qpair failed and we were unable to recover it. 
00:26:47.139 [2024-07-15 17:08:53.560614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.139 [2024-07-15 17:08:53.560680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.139 [2024-07-15 17:08:53.560694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.139 [2024-07-15 17:08:53.560700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.139 [2024-07-15 17:08:53.560706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.139 [2024-07-15 17:08:53.560720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.139 qpair failed and we were unable to recover it. 
00:26:47.139 [2024-07-15 17:08:53.570714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.139 [2024-07-15 17:08:53.570817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.139 [2024-07-15 17:08:53.570831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.139 [2024-07-15 17:08:53.570838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.139 [2024-07-15 17:08:53.570844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.139 [2024-07-15 17:08:53.570860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.139 qpair failed and we were unable to recover it. 
00:26:47.139 [2024-07-15 17:08:53.580731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.139 [2024-07-15 17:08:53.580796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.139 [2024-07-15 17:08:53.580810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.139 [2024-07-15 17:08:53.580816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.139 [2024-07-15 17:08:53.580821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.139 [2024-07-15 17:08:53.580836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.139 qpair failed and we were unable to recover it. 
00:26:47.139 [2024-07-15 17:08:53.590763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.139 [2024-07-15 17:08:53.590824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.139 [2024-07-15 17:08:53.590841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.139 [2024-07-15 17:08:53.590848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.139 [2024-07-15 17:08:53.590853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.139 [2024-07-15 17:08:53.590867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.139 qpair failed and we were unable to recover it. 
00:26:47.139 [2024-07-15 17:08:53.600781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.139 [2024-07-15 17:08:53.600838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.139 [2024-07-15 17:08:53.600852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.139 [2024-07-15 17:08:53.600859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.139 [2024-07-15 17:08:53.600865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.139 [2024-07-15 17:08:53.600878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.139 qpair failed and we were unable to recover it. 
00:26:47.139 [2024-07-15 17:08:53.610842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.139 [2024-07-15 17:08:53.610905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.139 [2024-07-15 17:08:53.610919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.139 [2024-07-15 17:08:53.610926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.139 [2024-07-15 17:08:53.610931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.139 [2024-07-15 17:08:53.610945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.139 qpair failed and we were unable to recover it. 
00:26:47.139 [2024-07-15 17:08:53.620834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.140 [2024-07-15 17:08:53.620903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.140 [2024-07-15 17:08:53.620917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.140 [2024-07-15 17:08:53.620924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.140 [2024-07-15 17:08:53.620930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.140 [2024-07-15 17:08:53.620943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.140 qpair failed and we were unable to recover it. 
00:26:47.140 [2024-07-15 17:08:53.630883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.140 [2024-07-15 17:08:53.630952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.140 [2024-07-15 17:08:53.630966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.140 [2024-07-15 17:08:53.630973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.140 [2024-07-15 17:08:53.630978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.140 [2024-07-15 17:08:53.630996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.140 qpair failed and we were unable to recover it. 
00:26:47.140 [2024-07-15 17:08:53.640880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.640945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.640963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.640970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.640976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.640990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.650978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.651039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.651054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.651060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.651066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.651080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.660949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.661009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.661024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.661030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.661035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.661049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.670979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.671042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.671056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.671062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.671068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.671082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.681001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.681065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.681082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.681088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.681094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.681108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.691085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.691146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.691161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.691167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.691173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.691187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.701103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.701163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.701177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.701183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.701189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.701202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.711131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.711190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.711204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.711211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.711216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.711235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.721118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.721232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.721247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.721254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.721262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.721277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.731147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.731209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.731226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.731233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.731239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.731253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.741172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.741239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.741254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.741260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.741266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.741280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.751198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.751265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.751279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.140 [2024-07-15 17:08:53.751286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.140 [2024-07-15 17:08:53.751292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.140 [2024-07-15 17:08:53.751306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.140 qpair failed and we were unable to recover it.
00:26:47.140 [2024-07-15 17:08:53.761229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.140 [2024-07-15 17:08:53.761291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.140 [2024-07-15 17:08:53.761305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.141 [2024-07-15 17:08:53.761311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.141 [2024-07-15 17:08:53.761317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.141 [2024-07-15 17:08:53.761331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.141 qpair failed and we were unable to recover it.
00:26:47.141 [2024-07-15 17:08:53.771266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.141 [2024-07-15 17:08:53.771330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.141 [2024-07-15 17:08:53.771344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.141 [2024-07-15 17:08:53.771351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.141 [2024-07-15 17:08:53.771356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.141 [2024-07-15 17:08:53.771370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.141 qpair failed and we were unable to recover it.
00:26:47.141 [2024-07-15 17:08:53.781306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.141 [2024-07-15 17:08:53.781387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.141 [2024-07-15 17:08:53.781402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.141 [2024-07-15 17:08:53.781408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.141 [2024-07-15 17:08:53.781414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.141 [2024-07-15 17:08:53.781428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.141 qpair failed and we were unable to recover it.
00:26:47.141 [2024-07-15 17:08:53.791322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.141 [2024-07-15 17:08:53.791384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.141 [2024-07-15 17:08:53.791399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.141 [2024-07-15 17:08:53.791405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.141 [2024-07-15 17:08:53.791411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.141 [2024-07-15 17:08:53.791425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.141 qpair failed and we were unable to recover it.
00:26:47.141 [2024-07-15 17:08:53.801358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.141 [2024-07-15 17:08:53.801419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.141 [2024-07-15 17:08:53.801434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.141 [2024-07-15 17:08:53.801440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.141 [2024-07-15 17:08:53.801446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.141 [2024-07-15 17:08:53.801460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.141 qpair failed and we were unable to recover it.
00:26:47.401 [2024-07-15 17:08:53.811346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.401 [2024-07-15 17:08:53.811406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.401 [2024-07-15 17:08:53.811421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.401 [2024-07-15 17:08:53.811428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.401 [2024-07-15 17:08:53.811437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.401 [2024-07-15 17:08:53.811451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.401 qpair failed and we were unable to recover it.
00:26:47.401 [2024-07-15 17:08:53.821422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.401 [2024-07-15 17:08:53.821486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.401 [2024-07-15 17:08:53.821501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.401 [2024-07-15 17:08:53.821508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.401 [2024-07-15 17:08:53.821513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.401 [2024-07-15 17:08:53.821527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.401 qpair failed and we were unable to recover it.
00:26:47.401 [2024-07-15 17:08:53.831511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.401 [2024-07-15 17:08:53.831583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.401 [2024-07-15 17:08:53.831597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.401 [2024-07-15 17:08:53.831604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.401 [2024-07-15 17:08:53.831610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.401 [2024-07-15 17:08:53.831624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.401 qpair failed and we were unable to recover it.
00:26:47.401 [2024-07-15 17:08:53.841447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.401 [2024-07-15 17:08:53.841506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.401 [2024-07-15 17:08:53.841521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.401 [2024-07-15 17:08:53.841527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.401 [2024-07-15 17:08:53.841533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.401 [2024-07-15 17:08:53.841547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.401 qpair failed and we were unable to recover it.
00:26:47.401 [2024-07-15 17:08:53.851508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.401 [2024-07-15 17:08:53.851569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.401 [2024-07-15 17:08:53.851583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.401 [2024-07-15 17:08:53.851590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.401 [2024-07-15 17:08:53.851596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.401 [2024-07-15 17:08:53.851610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.401 qpair failed and we were unable to recover it.
00:26:47.401 [2024-07-15 17:08:53.861469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.401 [2024-07-15 17:08:53.861530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.401 [2024-07-15 17:08:53.861544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.401 [2024-07-15 17:08:53.861551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.401 [2024-07-15 17:08:53.861557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.401 [2024-07-15 17:08:53.861570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.401 qpair failed and we were unable to recover it.
00:26:47.401 [2024-07-15 17:08:53.871604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.401 [2024-07-15 17:08:53.871666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.401 [2024-07-15 17:08:53.871679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.401 [2024-07-15 17:08:53.871686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.401 [2024-07-15 17:08:53.871692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.401 [2024-07-15 17:08:53.871706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.401 qpair failed and we were unable to recover it.
00:26:47.401 [2024-07-15 17:08:53.881585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.401 [2024-07-15 17:08:53.881682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.401 [2024-07-15 17:08:53.881696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.401 [2024-07-15 17:08:53.881703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.401 [2024-07-15 17:08:53.881709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.401 [2024-07-15 17:08:53.881723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.401 qpair failed and we were unable to recover it.
00:26:47.401 [2024-07-15 17:08:53.891625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.401 [2024-07-15 17:08:53.891736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.401 [2024-07-15 17:08:53.891751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.401 [2024-07-15 17:08:53.891757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.401 [2024-07-15 17:08:53.891763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.401 [2024-07-15 17:08:53.891777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.401 qpair failed and we were unable to recover it.
00:26:47.401 [2024-07-15 17:08:53.901625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.401 [2024-07-15 17:08:53.901687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.401 [2024-07-15 17:08:53.901702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.401 [2024-07-15 17:08:53.901711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.401 [2024-07-15 17:08:53.901717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.401 [2024-07-15 17:08:53.901730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.401 qpair failed and we were unable to recover it.
00:26:47.401 [2024-07-15 17:08:53.911615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.401 [2024-07-15 17:08:53.911683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.402 [2024-07-15 17:08:53.911697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.402 [2024-07-15 17:08:53.911703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.402 [2024-07-15 17:08:53.911709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.402 [2024-07-15 17:08:53.911723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.402 qpair failed and we were unable to recover it.
00:26:47.402 [2024-07-15 17:08:53.921725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.402 [2024-07-15 17:08:53.921785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.402 [2024-07-15 17:08:53.921800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.402 [2024-07-15 17:08:53.921806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.402 [2024-07-15 17:08:53.921812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.402 [2024-07-15 17:08:53.921827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.402 qpair failed and we were unable to recover it.
00:26:47.402 [2024-07-15 17:08:53.931730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.402 [2024-07-15 17:08:53.931791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.402 [2024-07-15 17:08:53.931805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.402 [2024-07-15 17:08:53.931811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.402 [2024-07-15 17:08:53.931818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.402 [2024-07-15 17:08:53.931831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.402 qpair failed and we were unable to recover it.
00:26:47.402 [2024-07-15 17:08:53.941765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.402 [2024-07-15 17:08:53.941830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.402 [2024-07-15 17:08:53.941844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.402 [2024-07-15 17:08:53.941850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.402 [2024-07-15 17:08:53.941856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.402 [2024-07-15 17:08:53.941870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.402 qpair failed and we were unable to recover it.
00:26:47.402 [2024-07-15 17:08:53.951809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.402 [2024-07-15 17:08:53.951866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.402 [2024-07-15 17:08:53.951880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.402 [2024-07-15 17:08:53.951887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.402 [2024-07-15 17:08:53.951892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.402 [2024-07-15 17:08:53.951906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.402 qpair failed and we were unable to recover it.
00:26:47.402 [2024-07-15 17:08:53.961842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.402 [2024-07-15 17:08:53.961903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.402 [2024-07-15 17:08:53.961917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.402 [2024-07-15 17:08:53.961924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.402 [2024-07-15 17:08:53.961930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.402 [2024-07-15 17:08:53.961943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.402 qpair failed and we were unable to recover it. 
00:26:47.402 [2024-07-15 17:08:53.971908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.402 [2024-07-15 17:08:53.971972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.402 [2024-07-15 17:08:53.971985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.402 [2024-07-15 17:08:53.971992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.402 [2024-07-15 17:08:53.971997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.402 [2024-07-15 17:08:53.972011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.402 qpair failed and we were unable to recover it. 
00:26:47.402 [2024-07-15 17:08:53.981878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.402 [2024-07-15 17:08:53.981942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.402 [2024-07-15 17:08:53.981957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.402 [2024-07-15 17:08:53.981963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.402 [2024-07-15 17:08:53.981969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.402 [2024-07-15 17:08:53.981983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.402 qpair failed and we were unable to recover it. 
00:26:47.402 [2024-07-15 17:08:53.991911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.402 [2024-07-15 17:08:53.991968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.402 [2024-07-15 17:08:53.991985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.402 [2024-07-15 17:08:53.991992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.402 [2024-07-15 17:08:53.991998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.402 [2024-07-15 17:08:53.992012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.402 qpair failed and we were unable to recover it. 
00:26:47.402 [2024-07-15 17:08:54.001939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.402 [2024-07-15 17:08:54.002003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.402 [2024-07-15 17:08:54.002017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.402 [2024-07-15 17:08:54.002023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.402 [2024-07-15 17:08:54.002029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.402 [2024-07-15 17:08:54.002043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.402 qpair failed and we were unable to recover it. 
00:26:47.402 [2024-07-15 17:08:54.011977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.402 [2024-07-15 17:08:54.012039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.402 [2024-07-15 17:08:54.012053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.402 [2024-07-15 17:08:54.012060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.402 [2024-07-15 17:08:54.012065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.402 [2024-07-15 17:08:54.012079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.402 qpair failed and we were unable to recover it. 
00:26:47.402 [2024-07-15 17:08:54.021992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.402 [2024-07-15 17:08:54.022057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.402 [2024-07-15 17:08:54.022071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.402 [2024-07-15 17:08:54.022078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.402 [2024-07-15 17:08:54.022084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.402 [2024-07-15 17:08:54.022097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.402 qpair failed and we were unable to recover it. 
00:26:47.402 [2024-07-15 17:08:54.032034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.402 [2024-07-15 17:08:54.032097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.402 [2024-07-15 17:08:54.032111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.402 [2024-07-15 17:08:54.032118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.402 [2024-07-15 17:08:54.032123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.402 [2024-07-15 17:08:54.032143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.402 qpair failed and we were unable to recover it. 
00:26:47.402 [2024-07-15 17:08:54.042072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.402 [2024-07-15 17:08:54.042157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.402 [2024-07-15 17:08:54.042171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.402 [2024-07-15 17:08:54.042177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.402 [2024-07-15 17:08:54.042183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.402 [2024-07-15 17:08:54.042197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.402 qpair failed and we were unable to recover it. 
00:26:47.402 [2024-07-15 17:08:54.052079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.402 [2024-07-15 17:08:54.052142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.402 [2024-07-15 17:08:54.052156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.402 [2024-07-15 17:08:54.052163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.402 [2024-07-15 17:08:54.052168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.403 [2024-07-15 17:08:54.052182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.403 qpair failed and we were unable to recover it. 
00:26:47.403 [2024-07-15 17:08:54.062139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.403 [2024-07-15 17:08:54.062202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.403 [2024-07-15 17:08:54.062216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.403 [2024-07-15 17:08:54.062223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.403 [2024-07-15 17:08:54.062232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.403 [2024-07-15 17:08:54.062246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.403 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.072134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.072192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.072206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.072213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.072219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.072239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.082169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.082234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.082252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.082259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.082264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.082278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.092215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.092288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.092303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.092309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.092315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.092329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.102234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.102300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.102314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.102321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.102326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.102341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.112260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.112326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.112340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.112347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.112352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.112367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.122290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.122354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.122368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.122375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.122381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.122398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.132358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.132438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.132452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.132459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.132464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.132478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.142349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.142413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.142428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.142435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.142440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.142454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.152382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.152446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.152460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.152466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.152472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.152486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.162409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.162470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.162484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.162490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.162496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.162510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.172431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.172499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.172513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.172520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.172526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.172540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.182445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.182507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.182522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.663 [2024-07-15 17:08:54.182528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.663 [2024-07-15 17:08:54.182534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.663 [2024-07-15 17:08:54.182547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.663 qpair failed and we were unable to recover it. 
00:26:47.663 [2024-07-15 17:08:54.192542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.663 [2024-07-15 17:08:54.192605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.663 [2024-07-15 17:08:54.192619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.664 [2024-07-15 17:08:54.192625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.664 [2024-07-15 17:08:54.192631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.664 [2024-07-15 17:08:54.192645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.664 qpair failed and we were unable to recover it. 
00:26:47.664 [2024-07-15 17:08:54.202522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.664 [2024-07-15 17:08:54.202578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.664 [2024-07-15 17:08:54.202594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.664 [2024-07-15 17:08:54.202600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.664 [2024-07-15 17:08:54.202606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.664 [2024-07-15 17:08:54.202620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.664 qpair failed and we were unable to recover it. 
00:26:47.664 [2024-07-15 17:08:54.212551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.664 [2024-07-15 17:08:54.212612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.664 [2024-07-15 17:08:54.212627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.664 [2024-07-15 17:08:54.212633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.664 [2024-07-15 17:08:54.212642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.664 [2024-07-15 17:08:54.212657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.664 qpair failed and we were unable to recover it. 
00:26:47.664 [2024-07-15 17:08:54.222574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.664 [2024-07-15 17:08:54.222636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.664 [2024-07-15 17:08:54.222650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.664 [2024-07-15 17:08:54.222657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.664 [2024-07-15 17:08:54.222663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.664 [2024-07-15 17:08:54.222677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.664 qpair failed and we were unable to recover it. 
00:26:47.664 [2024-07-15 17:08:54.232621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.664 [2024-07-15 17:08:54.232680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.664 [2024-07-15 17:08:54.232694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.664 [2024-07-15 17:08:54.232700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.664 [2024-07-15 17:08:54.232706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.664 [2024-07-15 17:08:54.232720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.664 qpair failed and we were unable to recover it. 
00:26:47.664 [2024-07-15 17:08:54.242628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.664 [2024-07-15 17:08:54.242685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.664 [2024-07-15 17:08:54.242698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.664 [2024-07-15 17:08:54.242705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.664 [2024-07-15 17:08:54.242711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.664 [2024-07-15 17:08:54.242725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.664 qpair failed and we were unable to recover it. 
00:26:47.664 [2024-07-15 17:08:54.252653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.664 [2024-07-15 17:08:54.252712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.664 [2024-07-15 17:08:54.252727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.664 [2024-07-15 17:08:54.252733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.664 [2024-07-15 17:08:54.252740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:47.664 [2024-07-15 17:08:54.252755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.664 qpair failed and we were unable to recover it. 
00:26:47.664 [2024-07-15 17:08:54.262743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.664 [2024-07-15 17:08:54.262810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.664 [2024-07-15 17:08:54.262826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.664 [2024-07-15 17:08:54.262832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.664 [2024-07-15 17:08:54.262838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.664 [2024-07-15 17:08:54.262852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.664 qpair failed and we were unable to recover it.
00:26:47.664 [2024-07-15 17:08:54.272758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.664 [2024-07-15 17:08:54.272847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.664 [2024-07-15 17:08:54.272861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.664 [2024-07-15 17:08:54.272867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.664 [2024-07-15 17:08:54.272873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.664 [2024-07-15 17:08:54.272888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.664 qpair failed and we were unable to recover it.
00:26:47.664 [2024-07-15 17:08:54.282735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.664 [2024-07-15 17:08:54.282798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.664 [2024-07-15 17:08:54.282812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.664 [2024-07-15 17:08:54.282818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.664 [2024-07-15 17:08:54.282824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.664 [2024-07-15 17:08:54.282838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.664 qpair failed and we were unable to recover it.
00:26:47.664 [2024-07-15 17:08:54.292767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.664 [2024-07-15 17:08:54.292827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.664 [2024-07-15 17:08:54.292841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.664 [2024-07-15 17:08:54.292847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.664 [2024-07-15 17:08:54.292853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.664 [2024-07-15 17:08:54.292867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.664 qpair failed and we were unable to recover it.
00:26:47.664 [2024-07-15 17:08:54.302795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.664 [2024-07-15 17:08:54.302862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.664 [2024-07-15 17:08:54.302875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.664 [2024-07-15 17:08:54.302885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.664 [2024-07-15 17:08:54.302891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.664 [2024-07-15 17:08:54.302905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.664 qpair failed and we were unable to recover it.
00:26:47.664 [2024-07-15 17:08:54.312760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.664 [2024-07-15 17:08:54.312826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.664 [2024-07-15 17:08:54.312841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.664 [2024-07-15 17:08:54.312848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.664 [2024-07-15 17:08:54.312854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.664 [2024-07-15 17:08:54.312869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.664 qpair failed and we were unable to recover it.
00:26:47.664 [2024-07-15 17:08:54.322845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.664 [2024-07-15 17:08:54.322905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.664 [2024-07-15 17:08:54.322920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.664 [2024-07-15 17:08:54.322927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.664 [2024-07-15 17:08:54.322932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.664 [2024-07-15 17:08:54.322947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.664 qpair failed and we were unable to recover it.
00:26:47.924 [2024-07-15 17:08:54.332824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.924 [2024-07-15 17:08:54.332890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.924 [2024-07-15 17:08:54.332906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.924 [2024-07-15 17:08:54.332913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.924 [2024-07-15 17:08:54.332920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.924 [2024-07-15 17:08:54.332934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.924 qpair failed and we were unable to recover it.
00:26:47.924 [2024-07-15 17:08:54.342855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.924 [2024-07-15 17:08:54.342920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.924 [2024-07-15 17:08:54.342935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.924 [2024-07-15 17:08:54.342941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.924 [2024-07-15 17:08:54.342947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.924 [2024-07-15 17:08:54.342961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.924 qpair failed and we were unable to recover it.
00:26:47.924 [2024-07-15 17:08:54.352963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.924 [2024-07-15 17:08:54.353026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.924 [2024-07-15 17:08:54.353041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.924 [2024-07-15 17:08:54.353047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.924 [2024-07-15 17:08:54.353053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.924 [2024-07-15 17:08:54.353067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.924 qpair failed and we were unable to recover it.
00:26:47.924 [2024-07-15 17:08:54.362961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.924 [2024-07-15 17:08:54.363022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.924 [2024-07-15 17:08:54.363037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.924 [2024-07-15 17:08:54.363043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.924 [2024-07-15 17:08:54.363049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.924 [2024-07-15 17:08:54.363063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.924 qpair failed and we were unable to recover it.
00:26:47.924 [2024-07-15 17:08:54.372994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.924 [2024-07-15 17:08:54.373058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.924 [2024-07-15 17:08:54.373072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.924 [2024-07-15 17:08:54.373078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.924 [2024-07-15 17:08:54.373084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.924 [2024-07-15 17:08:54.373099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.924 qpair failed and we were unable to recover it.
00:26:47.924 [2024-07-15 17:08:54.383007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.924 [2024-07-15 17:08:54.383071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.924 [2024-07-15 17:08:54.383085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.924 [2024-07-15 17:08:54.383092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.924 [2024-07-15 17:08:54.383097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.924 [2024-07-15 17:08:54.383112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.924 qpair failed and we were unable to recover it.
00:26:47.924 [2024-07-15 17:08:54.393006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.924 [2024-07-15 17:08:54.393066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.924 [2024-07-15 17:08:54.393081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.924 [2024-07-15 17:08:54.393091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.924 [2024-07-15 17:08:54.393097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.924 [2024-07-15 17:08:54.393111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.924 qpair failed and we were unable to recover it.
00:26:47.924 [2024-07-15 17:08:54.403063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.924 [2024-07-15 17:08:54.403124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.924 [2024-07-15 17:08:54.403138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.924 [2024-07-15 17:08:54.403145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.924 [2024-07-15 17:08:54.403150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.924 [2024-07-15 17:08:54.403165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.924 qpair failed and we were unable to recover it.
00:26:47.924 [2024-07-15 17:08:54.413137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.924 [2024-07-15 17:08:54.413200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.924 [2024-07-15 17:08:54.413215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.924 [2024-07-15 17:08:54.413222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.924 [2024-07-15 17:08:54.413231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.924 [2024-07-15 17:08:54.413245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.924 qpair failed and we were unable to recover it.
00:26:47.924 [2024-07-15 17:08:54.423075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.423138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.925 [2024-07-15 17:08:54.423153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.925 [2024-07-15 17:08:54.423159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.925 [2024-07-15 17:08:54.423165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.925 [2024-07-15 17:08:54.423179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.925 qpair failed and we were unable to recover it.
00:26:47.925 [2024-07-15 17:08:54.433163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.433222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.925 [2024-07-15 17:08:54.433239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.925 [2024-07-15 17:08:54.433246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.925 [2024-07-15 17:08:54.433252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.925 [2024-07-15 17:08:54.433267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.925 qpair failed and we were unable to recover it.
00:26:47.925 [2024-07-15 17:08:54.443198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.443263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.925 [2024-07-15 17:08:54.443277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.925 [2024-07-15 17:08:54.443284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.925 [2024-07-15 17:08:54.443289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.925 [2024-07-15 17:08:54.443304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.925 qpair failed and we were unable to recover it.
00:26:47.925 [2024-07-15 17:08:54.453237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.453302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.925 [2024-07-15 17:08:54.453317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.925 [2024-07-15 17:08:54.453324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.925 [2024-07-15 17:08:54.453329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.925 [2024-07-15 17:08:54.453344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.925 qpair failed and we were unable to recover it.
00:26:47.925 [2024-07-15 17:08:54.463256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.463319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.925 [2024-07-15 17:08:54.463333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.925 [2024-07-15 17:08:54.463340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.925 [2024-07-15 17:08:54.463346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.925 [2024-07-15 17:08:54.463360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.925 qpair failed and we were unable to recover it.
00:26:47.925 [2024-07-15 17:08:54.473295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.473356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.925 [2024-07-15 17:08:54.473370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.925 [2024-07-15 17:08:54.473377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.925 [2024-07-15 17:08:54.473382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.925 [2024-07-15 17:08:54.473396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.925 qpair failed and we were unable to recover it.
00:26:47.925 [2024-07-15 17:08:54.483334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.483393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.925 [2024-07-15 17:08:54.483410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.925 [2024-07-15 17:08:54.483417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.925 [2024-07-15 17:08:54.483423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.925 [2024-07-15 17:08:54.483437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.925 qpair failed and we were unable to recover it.
00:26:47.925 [2024-07-15 17:08:54.493351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.493415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.925 [2024-07-15 17:08:54.493430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.925 [2024-07-15 17:08:54.493436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.925 [2024-07-15 17:08:54.493442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.925 [2024-07-15 17:08:54.493456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.925 qpair failed and we were unable to recover it.
00:26:47.925 [2024-07-15 17:08:54.503383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.503446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.925 [2024-07-15 17:08:54.503461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.925 [2024-07-15 17:08:54.503467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.925 [2024-07-15 17:08:54.503473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.925 [2024-07-15 17:08:54.503487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.925 qpair failed and we were unable to recover it.
00:26:47.925 [2024-07-15 17:08:54.513408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.513469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.925 [2024-07-15 17:08:54.513485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.925 [2024-07-15 17:08:54.513492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.925 [2024-07-15 17:08:54.513498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.925 [2024-07-15 17:08:54.513512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.925 qpair failed and we were unable to recover it.
00:26:47.925 [2024-07-15 17:08:54.523428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.523491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.925 [2024-07-15 17:08:54.523505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.925 [2024-07-15 17:08:54.523511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.925 [2024-07-15 17:08:54.523517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.925 [2024-07-15 17:08:54.523534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.925 qpair failed and we were unable to recover it.
00:26:47.925 [2024-07-15 17:08:54.533466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.925 [2024-07-15 17:08:54.533527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.926 [2024-07-15 17:08:54.533541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.926 [2024-07-15 17:08:54.533547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.926 [2024-07-15 17:08:54.533553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.926 [2024-07-15 17:08:54.533568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.926 qpair failed and we were unable to recover it.
00:26:47.926 [2024-07-15 17:08:54.543481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.926 [2024-07-15 17:08:54.543543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.926 [2024-07-15 17:08:54.543558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.926 [2024-07-15 17:08:54.543564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.926 [2024-07-15 17:08:54.543570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.926 [2024-07-15 17:08:54.543584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.926 qpair failed and we were unable to recover it.
00:26:47.926 [2024-07-15 17:08:54.553575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.926 [2024-07-15 17:08:54.553638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.926 [2024-07-15 17:08:54.553652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.926 [2024-07-15 17:08:54.553658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.926 [2024-07-15 17:08:54.553664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.926 [2024-07-15 17:08:54.553679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.926 qpair failed and we were unable to recover it.
00:26:47.926 [2024-07-15 17:08:54.563545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.926 [2024-07-15 17:08:54.563609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.926 [2024-07-15 17:08:54.563623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.926 [2024-07-15 17:08:54.563629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.926 [2024-07-15 17:08:54.563635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.926 [2024-07-15 17:08:54.563649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.926 qpair failed and we were unable to recover it.
00:26:47.926 [2024-07-15 17:08:54.573572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.926 [2024-07-15 17:08:54.573634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.926 [2024-07-15 17:08:54.573651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.926 [2024-07-15 17:08:54.573657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.926 [2024-07-15 17:08:54.573663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.926 [2024-07-15 17:08:54.573677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.926 qpair failed and we were unable to recover it.
00:26:47.926 [2024-07-15 17:08:54.583606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.926 [2024-07-15 17:08:54.583671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.926 [2024-07-15 17:08:54.583685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.926 [2024-07-15 17:08:54.583692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.926 [2024-07-15 17:08:54.583697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:47.926 [2024-07-15 17:08:54.583711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.926 qpair failed and we were unable to recover it.
00:26:48.187 [2024-07-15 17:08:54.593627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.187 [2024-07-15 17:08:54.593685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.187 [2024-07-15 17:08:54.593699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.187 [2024-07-15 17:08:54.593706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.187 [2024-07-15 17:08:54.593712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.187 [2024-07-15 17:08:54.593727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.187 qpair failed and we were unable to recover it.
00:26:48.187 [2024-07-15 17:08:54.603707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.187 [2024-07-15 17:08:54.603768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.187 [2024-07-15 17:08:54.603782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.187 [2024-07-15 17:08:54.603789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.187 [2024-07-15 17:08:54.603795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.187 [2024-07-15 17:08:54.603809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.187 qpair failed and we were unable to recover it.
00:26:48.187 [2024-07-15 17:08:54.613667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.187 [2024-07-15 17:08:54.613727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.187 [2024-07-15 17:08:54.613743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.187 [2024-07-15 17:08:54.613749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.187 [2024-07-15 17:08:54.613853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.187 [2024-07-15 17:08:54.613868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.187 qpair failed and we were unable to recover it.
00:26:48.187 [2024-07-15 17:08:54.623814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.187 [2024-07-15 17:08:54.623876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.187 [2024-07-15 17:08:54.623891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.187 [2024-07-15 17:08:54.623898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.187 [2024-07-15 17:08:54.623904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.187 [2024-07-15 17:08:54.623918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.187 qpair failed and we were unable to recover it. 
00:26:48.187 [2024-07-15 17:08:54.633680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.187 [2024-07-15 17:08:54.633747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.187 [2024-07-15 17:08:54.633761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.187 [2024-07-15 17:08:54.633768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.187 [2024-07-15 17:08:54.633774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.187 [2024-07-15 17:08:54.633788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.187 qpair failed and we were unable to recover it. 
00:26:48.187 [2024-07-15 17:08:54.643766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.187 [2024-07-15 17:08:54.643826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.187 [2024-07-15 17:08:54.643840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.187 [2024-07-15 17:08:54.643847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.187 [2024-07-15 17:08:54.643852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.187 [2024-07-15 17:08:54.643867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.187 qpair failed and we were unable to recover it. 
00:26:48.187 [2024-07-15 17:08:54.653795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.187 [2024-07-15 17:08:54.653857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.187 [2024-07-15 17:08:54.653871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.187 [2024-07-15 17:08:54.653878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.187 [2024-07-15 17:08:54.653884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.187 [2024-07-15 17:08:54.653898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.187 qpair failed and we were unable to recover it. 
00:26:48.187 [2024-07-15 17:08:54.663906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.187 [2024-07-15 17:08:54.663994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.187 [2024-07-15 17:08:54.664009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.187 [2024-07-15 17:08:54.664016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.187 [2024-07-15 17:08:54.664022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.187 [2024-07-15 17:08:54.664036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.187 qpair failed and we were unable to recover it. 
00:26:48.187 [2024-07-15 17:08:54.673782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.187 [2024-07-15 17:08:54.673846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.187 [2024-07-15 17:08:54.673860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.187 [2024-07-15 17:08:54.673866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.187 [2024-07-15 17:08:54.673872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.187 [2024-07-15 17:08:54.673886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.187 qpair failed and we were unable to recover it. 
00:26:48.187 [2024-07-15 17:08:54.683912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.187 [2024-07-15 17:08:54.684000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.187 [2024-07-15 17:08:54.684014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.187 [2024-07-15 17:08:54.684021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.187 [2024-07-15 17:08:54.684026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.187 [2024-07-15 17:08:54.684040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.187 qpair failed and we were unable to recover it. 
00:26:48.187 [2024-07-15 17:08:54.693971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.187 [2024-07-15 17:08:54.694040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.187 [2024-07-15 17:08:54.694054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.187 [2024-07-15 17:08:54.694060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.694066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.694081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.703877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.703943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.703957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.703967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.703972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.703986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.713914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.713995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.714010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.714016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.714022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.714037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.724044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.724105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.724119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.724126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.724131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.724146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.734078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.734155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.734169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.734176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.734181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.734196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.744040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.744097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.744113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.744119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.744125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.744140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.754128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.754193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.754207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.754214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.754219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.754237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.764125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.764186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.764201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.764207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.764213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.764231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.774127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.774191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.774206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.774213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.774219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.774236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.784094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.784161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.784175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.784182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.784187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.784201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.794191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.794277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.794291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.794301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.794306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.794321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.804221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.804291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.804306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.804312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.804318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.804332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.814268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.814333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.814348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.814355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.814361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.814376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.824319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.824383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.824399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.824406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.824412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.824427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-07-15 17:08:54.834308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.188 [2024-07-15 17:08:54.834372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.188 [2024-07-15 17:08:54.834387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.188 [2024-07-15 17:08:54.834393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.188 [2024-07-15 17:08:54.834399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.188 [2024-07-15 17:08:54.834414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.189 [2024-07-15 17:08:54.844357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.189 [2024-07-15 17:08:54.844414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.189 [2024-07-15 17:08:54.844429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.189 [2024-07-15 17:08:54.844435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.189 [2024-07-15 17:08:54.844441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.189 [2024-07-15 17:08:54.844455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.189 qpair failed and we were unable to recover it. 
00:26:48.450 [2024-07-15 17:08:54.854332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.450 [2024-07-15 17:08:54.854393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.450 [2024-07-15 17:08:54.854408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.450 [2024-07-15 17:08:54.854415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.450 [2024-07-15 17:08:54.854421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.450 [2024-07-15 17:08:54.854435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.450 qpair failed and we were unable to recover it. 
00:26:48.450 [2024-07-15 17:08:54.864433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.450 [2024-07-15 17:08:54.864498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.450 [2024-07-15 17:08:54.864514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.450 [2024-07-15 17:08:54.864520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.450 [2024-07-15 17:08:54.864525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.450 [2024-07-15 17:08:54.864540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.450 qpair failed and we were unable to recover it. 
00:26:48.450 [2024-07-15 17:08:54.874383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.450 [2024-07-15 17:08:54.874447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.450 [2024-07-15 17:08:54.874461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.450 [2024-07-15 17:08:54.874468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.450 [2024-07-15 17:08:54.874474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.450 [2024-07-15 17:08:54.874488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.450 qpair failed and we were unable to recover it. 
00:26:48.450 [2024-07-15 17:08:54.884520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.450 [2024-07-15 17:08:54.884584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.450 [2024-07-15 17:08:54.884603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.450 [2024-07-15 17:08:54.884610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.450 [2024-07-15 17:08:54.884616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.450 [2024-07-15 17:08:54.884631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.450 qpair failed and we were unable to recover it. 
00:26:48.450 [2024-07-15 17:08:54.894455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.450 [2024-07-15 17:08:54.894516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.450 [2024-07-15 17:08:54.894530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.450 [2024-07-15 17:08:54.894537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.450 [2024-07-15 17:08:54.894542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.450 [2024-07-15 17:08:54.894557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.450 qpair failed and we were unable to recover it. 
00:26:48.450 [2024-07-15 17:08:54.904498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.450 [2024-07-15 17:08:54.904564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.450 [2024-07-15 17:08:54.904578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.450 [2024-07-15 17:08:54.904585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.450 [2024-07-15 17:08:54.904590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.450 [2024-07-15 17:08:54.904604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.450 qpair failed and we were unable to recover it.
00:26:48.450 [2024-07-15 17:08:54.914582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.450 [2024-07-15 17:08:54.914643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.450 [2024-07-15 17:08:54.914658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.450 [2024-07-15 17:08:54.914664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.450 [2024-07-15 17:08:54.914669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.450 [2024-07-15 17:08:54.914684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.450 qpair failed and we were unable to recover it.
00:26:48.450 [2024-07-15 17:08:54.924505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.450 [2024-07-15 17:08:54.924567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.450 [2024-07-15 17:08:54.924582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.450 [2024-07-15 17:08:54.924588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.450 [2024-07-15 17:08:54.924594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.450 [2024-07-15 17:08:54.924610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.450 qpair failed and we were unable to recover it.
00:26:48.450 [2024-07-15 17:08:54.934643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.450 [2024-07-15 17:08:54.934705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.450 [2024-07-15 17:08:54.934719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.450 [2024-07-15 17:08:54.934725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.450 [2024-07-15 17:08:54.934731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.450 [2024-07-15 17:08:54.934745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.450 qpair failed and we were unable to recover it.
00:26:48.450 [2024-07-15 17:08:54.944564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.450 [2024-07-15 17:08:54.944625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.450 [2024-07-15 17:08:54.944640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.450 [2024-07-15 17:08:54.944646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.450 [2024-07-15 17:08:54.944652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.450 [2024-07-15 17:08:54.944666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.450 qpair failed and we were unable to recover it.
00:26:48.450 [2024-07-15 17:08:54.954600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.450 [2024-07-15 17:08:54.954665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.450 [2024-07-15 17:08:54.954679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:54.954686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:54.954692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:54.954706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:54.964726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:54.964787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:54.964802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:54.964808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:54.964814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:54.964828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:54.974762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:54.974823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:54.974841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:54.974847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:54.974853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:54.974867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:54.984747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:54.984809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:54.984824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:54.984831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:54.984837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:54.984850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:54.994760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:54.994821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:54.994836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:54.994842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:54.994848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:54.994862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:55.004739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:55.004798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:55.004813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:55.004820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:55.004826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:55.004840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:55.014772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:55.014838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:55.014852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:55.014858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:55.014867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:55.014881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:55.024831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:55.024896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:55.024910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:55.024917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:55.024922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:55.024936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:55.034942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:55.035004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:55.035018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:55.035024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:55.035030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:55.035044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:55.044899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:55.044962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:55.044976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:55.044983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:55.044988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:55.045002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:55.054979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:55.055043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:55.055057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:55.055064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:55.055069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:55.055083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:55.064954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:55.065020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:55.065035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:55.065041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:55.065047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:55.065061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:55.075000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:55.075066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:55.075080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:55.075086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.451 [2024-07-15 17:08:55.075092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.451 [2024-07-15 17:08:55.075106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.451 qpair failed and we were unable to recover it.
00:26:48.451 [2024-07-15 17:08:55.085002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.451 [2024-07-15 17:08:55.085070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.451 [2024-07-15 17:08:55.085084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.451 [2024-07-15 17:08:55.085090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.452 [2024-07-15 17:08:55.085096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.452 [2024-07-15 17:08:55.085110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.452 qpair failed and we were unable to recover it.
00:26:48.452 [2024-07-15 17:08:55.095041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.452 [2024-07-15 17:08:55.095105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.452 [2024-07-15 17:08:55.095120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.452 [2024-07-15 17:08:55.095126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.452 [2024-07-15 17:08:55.095132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.452 [2024-07-15 17:08:55.095146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.452 qpair failed and we were unable to recover it.
00:26:48.452 [2024-07-15 17:08:55.105052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.452 [2024-07-15 17:08:55.105115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.452 [2024-07-15 17:08:55.105129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.452 [2024-07-15 17:08:55.105136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.452 [2024-07-15 17:08:55.105144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.452 [2024-07-15 17:08:55.105158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.452 qpair failed and we were unable to recover it.
00:26:48.452 [2024-07-15 17:08:55.115096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.452 [2024-07-15 17:08:55.115153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.452 [2024-07-15 17:08:55.115167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.452 [2024-07-15 17:08:55.115174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.452 [2024-07-15 17:08:55.115179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.452 [2024-07-15 17:08:55.115194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.452 qpair failed and we were unable to recover it.
00:26:48.712 [2024-07-15 17:08:55.125131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.712 [2024-07-15 17:08:55.125194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.712 [2024-07-15 17:08:55.125208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.712 [2024-07-15 17:08:55.125215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.712 [2024-07-15 17:08:55.125221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.712 [2024-07-15 17:08:55.125239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.712 qpair failed and we were unable to recover it.
00:26:48.712 [2024-07-15 17:08:55.135106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.712 [2024-07-15 17:08:55.135168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.712 [2024-07-15 17:08:55.135182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.712 [2024-07-15 17:08:55.135189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.712 [2024-07-15 17:08:55.135195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.712 [2024-07-15 17:08:55.135209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.712 qpair failed and we were unable to recover it.
00:26:48.712 [2024-07-15 17:08:55.145173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.712 [2024-07-15 17:08:55.145244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.145259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.145266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.145272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.145285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.155209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.713 [2024-07-15 17:08:55.155274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.155289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.155295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.155301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.155315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.165277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.713 [2024-07-15 17:08:55.165342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.165356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.165362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.165368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.165382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.175275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.713 [2024-07-15 17:08:55.175337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.175351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.175358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.175363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.175378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.185233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.713 [2024-07-15 17:08:55.185291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.185305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.185312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.185318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.185332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.195324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.713 [2024-07-15 17:08:55.195382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.195396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.195406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.195412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.195426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.205440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.713 [2024-07-15 17:08:55.205524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.205539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.205546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.205552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.205566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.215391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.713 [2024-07-15 17:08:55.215457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.215471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.215477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.215483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.215497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.225428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.713 [2024-07-15 17:08:55.225489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.225503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.225510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.225515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.225529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.235464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.713 [2024-07-15 17:08:55.235526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.235539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.235546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.235551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.235566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.245467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.713 [2024-07-15 17:08:55.245529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.245543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.245550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.245555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.245569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.255491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.713 [2024-07-15 17:08:55.255552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.713 [2024-07-15 17:08:55.255568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.713 [2024-07-15 17:08:55.255575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.713 [2024-07-15 17:08:55.255581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90
00:26:48.713 [2024-07-15 17:08:55.255595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.713 qpair failed and we were unable to recover it.
00:26:48.713 [2024-07-15 17:08:55.265541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.713 [2024-07-15 17:08:55.265602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.713 [2024-07-15 17:08:55.265616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.713 [2024-07-15 17:08:55.265623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.713 [2024-07-15 17:08:55.265628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.713 [2024-07-15 17:08:55.265643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.713 qpair failed and we were unable to recover it. 
00:26:48.713 [2024-07-15 17:08:55.275571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.713 [2024-07-15 17:08:55.275641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.713 [2024-07-15 17:08:55.275655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.713 [2024-07-15 17:08:55.275661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.713 [2024-07-15 17:08:55.275667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.713 [2024-07-15 17:08:55.275681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.713 qpair failed and we were unable to recover it. 
00:26:48.713 [2024-07-15 17:08:55.285599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.713 [2024-07-15 17:08:55.285659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.713 [2024-07-15 17:08:55.285676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.713 [2024-07-15 17:08:55.285682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.714 [2024-07-15 17:08:55.285688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.714 [2024-07-15 17:08:55.285702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.714 qpair failed and we were unable to recover it. 
00:26:48.714 [2024-07-15 17:08:55.295618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.714 [2024-07-15 17:08:55.295682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.714 [2024-07-15 17:08:55.295696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.714 [2024-07-15 17:08:55.295703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.714 [2024-07-15 17:08:55.295709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.714 [2024-07-15 17:08:55.295723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.714 qpair failed and we were unable to recover it. 
00:26:48.714 [2024-07-15 17:08:55.305631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.714 [2024-07-15 17:08:55.305697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.714 [2024-07-15 17:08:55.305711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.714 [2024-07-15 17:08:55.305718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.714 [2024-07-15 17:08:55.305723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.714 [2024-07-15 17:08:55.305737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.714 qpair failed and we were unable to recover it. 
00:26:48.714 [2024-07-15 17:08:55.315664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.714 [2024-07-15 17:08:55.315724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.714 [2024-07-15 17:08:55.315738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.714 [2024-07-15 17:08:55.315744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.714 [2024-07-15 17:08:55.315750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.714 [2024-07-15 17:08:55.315764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.714 qpair failed and we were unable to recover it. 
00:26:48.714 [2024-07-15 17:08:55.325745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.714 [2024-07-15 17:08:55.325807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.714 [2024-07-15 17:08:55.325822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.714 [2024-07-15 17:08:55.325828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.714 [2024-07-15 17:08:55.325834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.714 [2024-07-15 17:08:55.325851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.714 qpair failed and we were unable to recover it. 
00:26:48.714 [2024-07-15 17:08:55.335738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.714 [2024-07-15 17:08:55.335800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.714 [2024-07-15 17:08:55.335814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.714 [2024-07-15 17:08:55.335821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.714 [2024-07-15 17:08:55.335827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.714 [2024-07-15 17:08:55.335841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.714 qpair failed and we were unable to recover it. 
00:26:48.714 [2024-07-15 17:08:55.345751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.714 [2024-07-15 17:08:55.345815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.714 [2024-07-15 17:08:55.345829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.714 [2024-07-15 17:08:55.345836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.714 [2024-07-15 17:08:55.345842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.714 [2024-07-15 17:08:55.345856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.714 qpair failed and we were unable to recover it. 
00:26:48.714 [2024-07-15 17:08:55.355836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.714 [2024-07-15 17:08:55.355901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.714 [2024-07-15 17:08:55.355917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.714 [2024-07-15 17:08:55.355925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.714 [2024-07-15 17:08:55.355934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.714 [2024-07-15 17:08:55.355950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.714 qpair failed and we were unable to recover it. 
00:26:48.714 [2024-07-15 17:08:55.365767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.714 [2024-07-15 17:08:55.365832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.714 [2024-07-15 17:08:55.365848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.714 [2024-07-15 17:08:55.365854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.714 [2024-07-15 17:08:55.365861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.714 [2024-07-15 17:08:55.365874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.714 qpair failed and we were unable to recover it. 
00:26:48.714 [2024-07-15 17:08:55.375804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.714 [2024-07-15 17:08:55.375871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.714 [2024-07-15 17:08:55.375889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.714 [2024-07-15 17:08:55.375896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.714 [2024-07-15 17:08:55.375902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.714 [2024-07-15 17:08:55.375916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.714 qpair failed and we were unable to recover it. 
00:26:48.976 [2024-07-15 17:08:55.385920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.976 [2024-07-15 17:08:55.385984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.976 [2024-07-15 17:08:55.385999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.976 [2024-07-15 17:08:55.386005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.976 [2024-07-15 17:08:55.386011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.976 [2024-07-15 17:08:55.386025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.976 qpair failed and we were unable to recover it. 
00:26:48.976 [2024-07-15 17:08:55.395928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.976 [2024-07-15 17:08:55.395989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.976 [2024-07-15 17:08:55.396003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.976 [2024-07-15 17:08:55.396010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.976 [2024-07-15 17:08:55.396016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.976 [2024-07-15 17:08:55.396031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.976 qpair failed and we were unable to recover it. 
00:26:48.976 [2024-07-15 17:08:55.405952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.976 [2024-07-15 17:08:55.406008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.976 [2024-07-15 17:08:55.406023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.976 [2024-07-15 17:08:55.406030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.976 [2024-07-15 17:08:55.406036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.976 [2024-07-15 17:08:55.406050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.976 qpair failed and we were unable to recover it. 
00:26:48.976 [2024-07-15 17:08:55.415967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.976 [2024-07-15 17:08:55.416028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.976 [2024-07-15 17:08:55.416042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.976 [2024-07-15 17:08:55.416048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.976 [2024-07-15 17:08:55.416057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.976 [2024-07-15 17:08:55.416072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.976 qpair failed and we were unable to recover it. 
00:26:48.976 [2024-07-15 17:08:55.425998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.976 [2024-07-15 17:08:55.426062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.976 [2024-07-15 17:08:55.426077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.976 [2024-07-15 17:08:55.426084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.976 [2024-07-15 17:08:55.426089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.976 [2024-07-15 17:08:55.426103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.976 qpair failed and we were unable to recover it. 
00:26:48.976 [2024-07-15 17:08:55.436053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.976 [2024-07-15 17:08:55.436111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.976 [2024-07-15 17:08:55.436125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.976 [2024-07-15 17:08:55.436132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.976 [2024-07-15 17:08:55.436138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.976 [2024-07-15 17:08:55.436152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.976 qpair failed and we were unable to recover it. 
00:26:48.976 [2024-07-15 17:08:55.446049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.976 [2024-07-15 17:08:55.446110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.976 [2024-07-15 17:08:55.446125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.976 [2024-07-15 17:08:55.446131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.976 [2024-07-15 17:08:55.446137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.976 [2024-07-15 17:08:55.446151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.976 qpair failed and we were unable to recover it. 
00:26:48.976 [2024-07-15 17:08:55.456088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.976 [2024-07-15 17:08:55.456150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.976 [2024-07-15 17:08:55.456164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.976 [2024-07-15 17:08:55.456171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.976 [2024-07-15 17:08:55.456177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.976 [2024-07-15 17:08:55.456191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.976 qpair failed and we were unable to recover it. 
00:26:48.976 [2024-07-15 17:08:55.466107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.976 [2024-07-15 17:08:55.466173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.976 [2024-07-15 17:08:55.466188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.976 [2024-07-15 17:08:55.466195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.976 [2024-07-15 17:08:55.466200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.976 [2024-07-15 17:08:55.466214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.976 qpair failed and we were unable to recover it. 
00:26:48.976 [2024-07-15 17:08:55.476152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.976 [2024-07-15 17:08:55.476215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.976 [2024-07-15 17:08:55.476233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.976 [2024-07-15 17:08:55.476240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.976 [2024-07-15 17:08:55.476246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.976 [2024-07-15 17:08:55.476261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.976 qpair failed and we were unable to recover it. 
00:26:48.976 [2024-07-15 17:08:55.486223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.976 [2024-07-15 17:08:55.486340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.976 [2024-07-15 17:08:55.486356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.486363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.486368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.977 [2024-07-15 17:08:55.486383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.496190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.496257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.496271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.496278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.496284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.977 [2024-07-15 17:08:55.496298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.506239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.506300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.506314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.506320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.506329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.977 [2024-07-15 17:08:55.506343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.516314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.516378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.516392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.516399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.516404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.977 [2024-07-15 17:08:55.516418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.526307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.526367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.526382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.526388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.526394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.977 [2024-07-15 17:08:55.526408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.536321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.536382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.536396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.536402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.536408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.977 [2024-07-15 17:08:55.536422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.546339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.546405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.546420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.546426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.546432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:48.977 [2024-07-15 17:08:55.546446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.556436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.556517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.556544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.556556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.556565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:48.977 [2024-07-15 17:08:55.556587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.566416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.566476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.566493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.566499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.566505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:48.977 [2024-07-15 17:08:55.566519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.576441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.576506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.576522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.576528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.576534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:48.977 [2024-07-15 17:08:55.576548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.586458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.586517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.586532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.586539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.586545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:48.977 [2024-07-15 17:08:55.586558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.596517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.596588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.596603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.596614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.596619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:48.977 [2024-07-15 17:08:55.596632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.606524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.606588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.606604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.606611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.606617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:48.977 [2024-07-15 17:08:55.606631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.616539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.977 [2024-07-15 17:08:55.616607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.977 [2024-07-15 17:08:55.616623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.977 [2024-07-15 17:08:55.616630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.977 [2024-07-15 17:08:55.616635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:48.977 [2024-07-15 17:08:55.616649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.977 qpair failed and we were unable to recover it. 
00:26:48.977 [2024-07-15 17:08:55.626551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.978 [2024-07-15 17:08:55.626619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.978 [2024-07-15 17:08:55.626634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.978 [2024-07-15 17:08:55.626640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.978 [2024-07-15 17:08:55.626646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:48.978 [2024-07-15 17:08:55.626660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.978 qpair failed and we were unable to recover it. 
00:26:48.978 [2024-07-15 17:08:55.636625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.978 [2024-07-15 17:08:55.636707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.978 [2024-07-15 17:08:55.636721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.978 [2024-07-15 17:08:55.636728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.978 [2024-07-15 17:08:55.636734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:48.978 [2024-07-15 17:08:55.636747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.978 qpair failed and we were unable to recover it. 
00:26:49.239 [2024-07-15 17:08:55.646643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.239 [2024-07-15 17:08:55.646706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.239 [2024-07-15 17:08:55.646721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.239 [2024-07-15 17:08:55.646728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.239 [2024-07-15 17:08:55.646733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.239 [2024-07-15 17:08:55.646748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.239 qpair failed and we were unable to recover it. 
00:26:49.239 [2024-07-15 17:08:55.656718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.239 [2024-07-15 17:08:55.656787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.239 [2024-07-15 17:08:55.656803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.239 [2024-07-15 17:08:55.656809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.239 [2024-07-15 17:08:55.656815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.239 [2024-07-15 17:08:55.656829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.239 qpair failed and we were unable to recover it. 
00:26:49.239 [2024-07-15 17:08:55.666718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.239 [2024-07-15 17:08:55.666781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.239 [2024-07-15 17:08:55.666796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.239 [2024-07-15 17:08:55.666802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.239 [2024-07-15 17:08:55.666808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.239 [2024-07-15 17:08:55.666821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.239 qpair failed and we were unable to recover it. 
00:26:49.239 [2024-07-15 17:08:55.676733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.239 [2024-07-15 17:08:55.676795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.239 [2024-07-15 17:08:55.676810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.239 [2024-07-15 17:08:55.676816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.239 [2024-07-15 17:08:55.676822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.239 [2024-07-15 17:08:55.676836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.239 qpair failed and we were unable to recover it. 
00:26:49.239 [2024-07-15 17:08:55.686756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.239 [2024-07-15 17:08:55.686813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.239 [2024-07-15 17:08:55.686830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.239 [2024-07-15 17:08:55.686841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.239 [2024-07-15 17:08:55.686847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.239 [2024-07-15 17:08:55.686862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.239 qpair failed and we were unable to recover it. 
00:26:49.239 [2024-07-15 17:08:55.696794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.239 [2024-07-15 17:08:55.696855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.239 [2024-07-15 17:08:55.696871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.239 [2024-07-15 17:08:55.696877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.239 [2024-07-15 17:08:55.696883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.239 [2024-07-15 17:08:55.696896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.239 qpair failed and we were unable to recover it. 
00:26:49.239 [2024-07-15 17:08:55.706824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.239 [2024-07-15 17:08:55.706892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.239 [2024-07-15 17:08:55.706907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.239 [2024-07-15 17:08:55.706913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.239 [2024-07-15 17:08:55.706919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.239 [2024-07-15 17:08:55.706933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.239 qpair failed and we were unable to recover it. 
00:26:49.239 [2024-07-15 17:08:55.716848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.239 [2024-07-15 17:08:55.716907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.239 [2024-07-15 17:08:55.716922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.239 [2024-07-15 17:08:55.716929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.239 [2024-07-15 17:08:55.716935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.239 [2024-07-15 17:08:55.716948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.239 qpair failed and we were unable to recover it. 
00:26:49.239 [2024-07-15 17:08:55.726898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.239 [2024-07-15 17:08:55.726965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.239 [2024-07-15 17:08:55.726981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.239 [2024-07-15 17:08:55.726987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.239 [2024-07-15 17:08:55.726993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.239 [2024-07-15 17:08:55.727007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.239 qpair failed and we were unable to recover it. 
00:26:49.239 [2024-07-15 17:08:55.736920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.239 [2024-07-15 17:08:55.736985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.239 [2024-07-15 17:08:55.737001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.239 [2024-07-15 17:08:55.737008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.239 [2024-07-15 17:08:55.737013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.239 [2024-07-15 17:08:55.737027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.239 qpair failed and we were unable to recover it. 
00:26:49.239 [2024-07-15 17:08:55.747024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.239 [2024-07-15 17:08:55.747114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.239 [2024-07-15 17:08:55.747129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.239 [2024-07-15 17:08:55.747135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.239 [2024-07-15 17:08:55.747141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.747155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.756977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.240 [2024-07-15 17:08:55.757037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.240 [2024-07-15 17:08:55.757052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.240 [2024-07-15 17:08:55.757059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.240 [2024-07-15 17:08:55.757064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.757077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.767031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.240 [2024-07-15 17:08:55.767089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.240 [2024-07-15 17:08:55.767104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.240 [2024-07-15 17:08:55.767111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.240 [2024-07-15 17:08:55.767117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.767131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.777081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.240 [2024-07-15 17:08:55.777148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.240 [2024-07-15 17:08:55.777163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.240 [2024-07-15 17:08:55.777173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.240 [2024-07-15 17:08:55.777179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.777192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.787127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.240 [2024-07-15 17:08:55.787229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.240 [2024-07-15 17:08:55.787245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.240 [2024-07-15 17:08:55.787252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.240 [2024-07-15 17:08:55.787258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.787273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.797083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.240 [2024-07-15 17:08:55.797141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.240 [2024-07-15 17:08:55.797157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.240 [2024-07-15 17:08:55.797164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.240 [2024-07-15 17:08:55.797170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.797183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.807116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.240 [2024-07-15 17:08:55.807174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.240 [2024-07-15 17:08:55.807190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.240 [2024-07-15 17:08:55.807196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.240 [2024-07-15 17:08:55.807202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.807216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.817152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.240 [2024-07-15 17:08:55.817211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.240 [2024-07-15 17:08:55.817236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.240 [2024-07-15 17:08:55.817243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.240 [2024-07-15 17:08:55.817249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.817263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.827178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.240 [2024-07-15 17:08:55.827248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.240 [2024-07-15 17:08:55.827263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.240 [2024-07-15 17:08:55.827270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.240 [2024-07-15 17:08:55.827276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.827289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.837223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.240 [2024-07-15 17:08:55.837289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.240 [2024-07-15 17:08:55.837305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.240 [2024-07-15 17:08:55.837312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.240 [2024-07-15 17:08:55.837317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.837331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.847231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.240 [2024-07-15 17:08:55.847297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.240 [2024-07-15 17:08:55.847312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.240 [2024-07-15 17:08:55.847319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.240 [2024-07-15 17:08:55.847325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.847339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.857286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.240 [2024-07-15 17:08:55.857347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.240 [2024-07-15 17:08:55.857362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.240 [2024-07-15 17:08:55.857369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.240 [2024-07-15 17:08:55.857375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.240 [2024-07-15 17:08:55.857388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.240 qpair failed and we were unable to recover it. 
00:26:49.240 [2024-07-15 17:08:55.867321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.240 [2024-07-15 17:08:55.867404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.240 [2024-07-15 17:08:55.867422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.240 [2024-07-15 17:08:55.867428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.240 [2024-07-15 17:08:55.867434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.240 [2024-07-15 17:08:55.867448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.240 qpair failed and we were unable to recover it.
00:26:49.240 [2024-07-15 17:08:55.877314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.240 [2024-07-15 17:08:55.877376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.240 [2024-07-15 17:08:55.877391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.240 [2024-07-15 17:08:55.877398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.240 [2024-07-15 17:08:55.877404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.240 [2024-07-15 17:08:55.877418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.240 qpair failed and we were unable to recover it.
00:26:49.240 [2024-07-15 17:08:55.887326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.240 [2024-07-15 17:08:55.887390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.240 [2024-07-15 17:08:55.887405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.240 [2024-07-15 17:08:55.887412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.240 [2024-07-15 17:08:55.887418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.240 [2024-07-15 17:08:55.887431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.240 qpair failed and we were unable to recover it.
00:26:49.240 [2024-07-15 17:08:55.897366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.241 [2024-07-15 17:08:55.897425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.241 [2024-07-15 17:08:55.897440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.241 [2024-07-15 17:08:55.897446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.241 [2024-07-15 17:08:55.897452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.241 [2024-07-15 17:08:55.897465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.241 qpair failed and we were unable to recover it.
00:26:49.500 [2024-07-15 17:08:55.907444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.500 [2024-07-15 17:08:55.907506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.500 [2024-07-15 17:08:55.907521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.500 [2024-07-15 17:08:55.907528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.500 [2024-07-15 17:08:55.907535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.500 [2024-07-15 17:08:55.907549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.500 qpair failed and we were unable to recover it.
00:26:49.500 [2024-07-15 17:08:55.917431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.500 [2024-07-15 17:08:55.917494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.500 [2024-07-15 17:08:55.917509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.500 [2024-07-15 17:08:55.917516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.500 [2024-07-15 17:08:55.917522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.500 [2024-07-15 17:08:55.917535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:55.927449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:55.927509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:55.927524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:55.927530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:55.927536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:55.927550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:55.937517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:55.937598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:55.937613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:55.937619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:55.937625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:55.937639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:55.947497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:55.947557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:55.947571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:55.947578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:55.947584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:55.947597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:55.957540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:55.957603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:55.957621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:55.957628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:55.957634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:55.957647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:55.967573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:55.967632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:55.967647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:55.967654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:55.967660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:55.967674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:55.977593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:55.977655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:55.977670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:55.977677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:55.977683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:55.977696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:55.987603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:55.987665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:55.987680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:55.987686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:55.987692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:55.987705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:55.997630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:55.997690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:55.997705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:55.997712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:55.997717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:55.997734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:56.007662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:56.007723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:56.007738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:56.007744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:56.007750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:56.007763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:56.017701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:56.017770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:56.017785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:56.017791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:56.017797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:56.017811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:56.027726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:56.027789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:56.027804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:56.027810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:56.027815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:56.027830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:56.037751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:56.037809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:56.037824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:56.037831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:56.037837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:56.037851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.501 [2024-07-15 17:08:56.047787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.501 [2024-07-15 17:08:56.047848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.501 [2024-07-15 17:08:56.047866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.501 [2024-07-15 17:08:56.047872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.501 [2024-07-15 17:08:56.047878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.501 [2024-07-15 17:08:56.047891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.501 qpair failed and we were unable to recover it.
00:26:49.502 [2024-07-15 17:08:56.057752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.502 [2024-07-15 17:08:56.057820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.502 [2024-07-15 17:08:56.057835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.502 [2024-07-15 17:08:56.057841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.502 [2024-07-15 17:08:56.057847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.502 [2024-07-15 17:08:56.057861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.502 qpair failed and we were unable to recover it.
00:26:49.502 [2024-07-15 17:08:56.067914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.502 [2024-07-15 17:08:56.067989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.502 [2024-07-15 17:08:56.068005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.502 [2024-07-15 17:08:56.068011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.502 [2024-07-15 17:08:56.068017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.502 [2024-07-15 17:08:56.068031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.502 qpair failed and we were unable to recover it.
00:26:49.502 [2024-07-15 17:08:56.077905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.502 [2024-07-15 17:08:56.077974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.502 [2024-07-15 17:08:56.077989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.502 [2024-07-15 17:08:56.077995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.502 [2024-07-15 17:08:56.078001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.502 [2024-07-15 17:08:56.078015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.502 qpair failed and we were unable to recover it.
00:26:49.502 [2024-07-15 17:08:56.087836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.502 [2024-07-15 17:08:56.087899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.502 [2024-07-15 17:08:56.087914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.502 [2024-07-15 17:08:56.087921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.502 [2024-07-15 17:08:56.087927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.502 [2024-07-15 17:08:56.087943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.502 qpair failed and we were unable to recover it.
00:26:49.502 [2024-07-15 17:08:56.097921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.502 [2024-07-15 17:08:56.097986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.502 [2024-07-15 17:08:56.098001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.502 [2024-07-15 17:08:56.098007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.502 [2024-07-15 17:08:56.098013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.502 [2024-07-15 17:08:56.098027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.502 qpair failed and we were unable to recover it.
00:26:49.502 [2024-07-15 17:08:56.107934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.502 [2024-07-15 17:08:56.108015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.502 [2024-07-15 17:08:56.108031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.502 [2024-07-15 17:08:56.108038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.502 [2024-07-15 17:08:56.108044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.502 [2024-07-15 17:08:56.108057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.502 qpair failed and we were unable to recover it.
00:26:49.502 [2024-07-15 17:08:56.118006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.502 [2024-07-15 17:08:56.118071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.502 [2024-07-15 17:08:56.118088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.502 [2024-07-15 17:08:56.118094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.502 [2024-07-15 17:08:56.118100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.502 [2024-07-15 17:08:56.118114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.502 qpair failed and we were unable to recover it.
00:26:49.502 [2024-07-15 17:08:56.128019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.502 [2024-07-15 17:08:56.128081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.502 [2024-07-15 17:08:56.128096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.502 [2024-07-15 17:08:56.128103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.502 [2024-07-15 17:08:56.128108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.502 [2024-07-15 17:08:56.128122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.502 qpair failed and we were unable to recover it.
00:26:49.502 [2024-07-15 17:08:56.137993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.502 [2024-07-15 17:08:56.138056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.502 [2024-07-15 17:08:56.138077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.502 [2024-07-15 17:08:56.138084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.502 [2024-07-15 17:08:56.138089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.502 [2024-07-15 17:08:56.138103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.502 qpair failed and we were unable to recover it.
00:26:49.502 [2024-07-15 17:08:56.148074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.502 [2024-07-15 17:08:56.148139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.502 [2024-07-15 17:08:56.148154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.502 [2024-07-15 17:08:56.148161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.502 [2024-07-15 17:08:56.148167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.502 [2024-07-15 17:08:56.148180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.502 qpair failed and we were unable to recover it.
00:26:49.502 [2024-07-15 17:08:56.158040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.502 [2024-07-15 17:08:56.158101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.502 [2024-07-15 17:08:56.158116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.502 [2024-07-15 17:08:56.158123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.502 [2024-07-15 17:08:56.158128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.502 [2024-07-15 17:08:56.158141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.502 qpair failed and we were unable to recover it.
00:26:49.764 [2024-07-15 17:08:56.168115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.764 [2024-07-15 17:08:56.168177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.764 [2024-07-15 17:08:56.168194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.764 [2024-07-15 17:08:56.168200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.764 [2024-07-15 17:08:56.168206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.764 [2024-07-15 17:08:56.168220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.764 qpair failed and we were unable to recover it.
00:26:49.764 [2024-07-15 17:08:56.178194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.764 [2024-07-15 17:08:56.178256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.764 [2024-07-15 17:08:56.178272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.764 [2024-07-15 17:08:56.178278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.764 [2024-07-15 17:08:56.178284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.764 [2024-07-15 17:08:56.178301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.764 qpair failed and we were unable to recover it.
00:26:49.764 [2024-07-15 17:08:56.188159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.764 [2024-07-15 17:08:56.188231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.764 [2024-07-15 17:08:56.188246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.764 [2024-07-15 17:08:56.188253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.764 [2024-07-15 17:08:56.188259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.764 [2024-07-15 17:08:56.188273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.764 qpair failed and we were unable to recover it.
00:26:49.764 [2024-07-15 17:08:56.198160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.764 [2024-07-15 17:08:56.198221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.764 [2024-07-15 17:08:56.198240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.764 [2024-07-15 17:08:56.198247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.764 [2024-07-15 17:08:56.198253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.764 [2024-07-15 17:08:56.198266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.764 qpair failed and we were unable to recover it.
00:26:49.764 [2024-07-15 17:08:56.208233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.764 [2024-07-15 17:08:56.208296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.764 [2024-07-15 17:08:56.208313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.764 [2024-07-15 17:08:56.208320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.764 [2024-07-15 17:08:56.208326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.764 [2024-07-15 17:08:56.208339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.764 qpair failed and we were unable to recover it.
00:26:49.764 [2024-07-15 17:08:56.218258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.764 [2024-07-15 17:08:56.218320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.764 [2024-07-15 17:08:56.218336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.764 [2024-07-15 17:08:56.218342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.764 [2024-07-15 17:08:56.218348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:49.764 [2024-07-15 17:08:56.218362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.764 qpair failed and we were unable to recover it.
00:26:49.764 [2024-07-15 17:08:56.228241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.764 [2024-07-15 17:08:56.228305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.764 [2024-07-15 17:08:56.228324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.764 [2024-07-15 17:08:56.228330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.764 [2024-07-15 17:08:56.228336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.764 [2024-07-15 17:08:56.228350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.764 qpair failed and we were unable to recover it. 
00:26:49.764 [2024-07-15 17:08:56.238328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.764 [2024-07-15 17:08:56.238385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.764 [2024-07-15 17:08:56.238400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.764 [2024-07-15 17:08:56.238407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.764 [2024-07-15 17:08:56.238413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.764 [2024-07-15 17:08:56.238426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.764 qpair failed and we were unable to recover it. 
00:26:49.764 [2024-07-15 17:08:56.248297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.764 [2024-07-15 17:08:56.248364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.764 [2024-07-15 17:08:56.248379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.764 [2024-07-15 17:08:56.248386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.764 [2024-07-15 17:08:56.248391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.764 [2024-07-15 17:08:56.248405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.764 qpair failed and we were unable to recover it. 
00:26:49.764 [2024-07-15 17:08:56.258406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.764 [2024-07-15 17:08:56.258469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.764 [2024-07-15 17:08:56.258485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.764 [2024-07-15 17:08:56.258492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.764 [2024-07-15 17:08:56.258498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.764 [2024-07-15 17:08:56.258512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.764 qpair failed and we were unable to recover it. 
00:26:49.764 [2024-07-15 17:08:56.268342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.764 [2024-07-15 17:08:56.268403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.764 [2024-07-15 17:08:56.268418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.764 [2024-07-15 17:08:56.268425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.764 [2024-07-15 17:08:56.268434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.764 [2024-07-15 17:08:56.268448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.764 qpair failed and we were unable to recover it. 
00:26:49.764 [2024-07-15 17:08:56.278440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.764 [2024-07-15 17:08:56.278503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.764 [2024-07-15 17:08:56.278517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.764 [2024-07-15 17:08:56.278524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.764 [2024-07-15 17:08:56.278530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.764 [2024-07-15 17:08:56.278543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.764 qpair failed and we were unable to recover it. 
00:26:49.764 [2024-07-15 17:08:56.288473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.764 [2024-07-15 17:08:56.288554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.288569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.288576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.288581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.288595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.298474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.298536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.298551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.298558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.298564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.298577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.308468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.308533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.308548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.308555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.308560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.308574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.318565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.318629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.318645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.318652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.318658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.318671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.328526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.328587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.328602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.328609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.328615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.328628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.338553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.338664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.338680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.338686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.338692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.338706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.348613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.348677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.348692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.348698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.348704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.348718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.358608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.358669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.358684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.358691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.358700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.358714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.368646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.368708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.368724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.368730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.368736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.368749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.378668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.378730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.378746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.378752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.378758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.378771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.388744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.388831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.388846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.388852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.388858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.388871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.398844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.398905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.398921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.398928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.398933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.398947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.408759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.408822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.408837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.408844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.408850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.408864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.418784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.418849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.418864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.418871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.418877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.765 [2024-07-15 17:08:56.418890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.765 qpair failed and we were unable to recover it. 
00:26:49.765 [2024-07-15 17:08:56.428864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.765 [2024-07-15 17:08:56.428924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.765 [2024-07-15 17:08:56.428939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.765 [2024-07-15 17:08:56.428946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.765 [2024-07-15 17:08:56.428951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:49.766 [2024-07-15 17:08:56.428965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.766 qpair failed and we were unable to recover it. 
00:26:50.026 [2024-07-15 17:08:56.438840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.026 [2024-07-15 17:08:56.438900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.026 [2024-07-15 17:08:56.438915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.026 [2024-07-15 17:08:56.438922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.026 [2024-07-15 17:08:56.438928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.026 [2024-07-15 17:08:56.438941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.026 qpair failed and we were unable to recover it. 
00:26:50.026 [2024-07-15 17:08:56.448936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.026 [2024-07-15 17:08:56.448999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.026 [2024-07-15 17:08:56.449015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.026 [2024-07-15 17:08:56.449022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.026 [2024-07-15 17:08:56.449031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.026 [2024-07-15 17:08:56.449045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.026 qpair failed and we were unable to recover it. 
00:26:50.026 [2024-07-15 17:08:56.458907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.026 [2024-07-15 17:08:56.458970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.026 [2024-07-15 17:08:56.458986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.026 [2024-07-15 17:08:56.458992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.026 [2024-07-15 17:08:56.458998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.026 [2024-07-15 17:08:56.459012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.026 qpair failed and we were unable to recover it. 
00:26:50.026 [2024-07-15 17:08:56.468914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.027 [2024-07-15 17:08:56.468974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.027 [2024-07-15 17:08:56.468990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.027 [2024-07-15 17:08:56.468996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.027 [2024-07-15 17:08:56.469002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.027 [2024-07-15 17:08:56.469017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.027 qpair failed and we were unable to recover it. 
00:26:50.027 [2024-07-15 17:08:56.479019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.027 [2024-07-15 17:08:56.479084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.027 [2024-07-15 17:08:56.479100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.027 [2024-07-15 17:08:56.479107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.027 [2024-07-15 17:08:56.479113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.027 [2024-07-15 17:08:56.479127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.027 qpair failed and we were unable to recover it. 
00:26:50.027 [2024-07-15 17:08:56.488971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.027 [2024-07-15 17:08:56.489030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.027 [2024-07-15 17:08:56.489046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.027 [2024-07-15 17:08:56.489053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.027 [2024-07-15 17:08:56.489059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.027 [2024-07-15 17:08:56.489073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.027 qpair failed and we were unable to recover it. 
00:26:50.027 [2024-07-15 17:08:56.499058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.027 [2024-07-15 17:08:56.499124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.027 [2024-07-15 17:08:56.499139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.027 [2024-07-15 17:08:56.499146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.027 [2024-07-15 17:08:56.499153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.027 [2024-07-15 17:08:56.499166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.027 qpair failed and we were unable to recover it. 
00:26:50.027 [2024-07-15 17:08:56.509136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.027 [2024-07-15 17:08:56.509199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.027 [2024-07-15 17:08:56.509214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.027 [2024-07-15 17:08:56.509221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.027 [2024-07-15 17:08:56.509231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.027 [2024-07-15 17:08:56.509245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.027 qpair failed and we were unable to recover it.
00:26:50.027 [2024-07-15 17:08:56.519134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.027 [2024-07-15 17:08:56.519194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.027 [2024-07-15 17:08:56.519208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.027 [2024-07-15 17:08:56.519215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.027 [2024-07-15 17:08:56.519221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.027 [2024-07-15 17:08:56.519238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.027 qpair failed and we were unable to recover it.
00:26:50.027 [2024-07-15 17:08:56.529200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.027 [2024-07-15 17:08:56.529263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.027 [2024-07-15 17:08:56.529279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.027 [2024-07-15 17:08:56.529285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.027 [2024-07-15 17:08:56.529291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.027 [2024-07-15 17:08:56.529304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.027 qpair failed and we were unable to recover it.
00:26:50.027 [2024-07-15 17:08:56.539195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.027 [2024-07-15 17:08:56.539268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.027 [2024-07-15 17:08:56.539283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.027 [2024-07-15 17:08:56.539294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.027 [2024-07-15 17:08:56.539300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.027 [2024-07-15 17:08:56.539314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.027 qpair failed and we were unable to recover it.
00:26:50.027 [2024-07-15 17:08:56.549183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.027 [2024-07-15 17:08:56.549248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.027 [2024-07-15 17:08:56.549264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.027 [2024-07-15 17:08:56.549270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.027 [2024-07-15 17:08:56.549276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.027 [2024-07-15 17:08:56.549290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.027 qpair failed and we were unable to recover it.
00:26:50.027 [2024-07-15 17:08:56.559244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.027 [2024-07-15 17:08:56.559306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.027 [2024-07-15 17:08:56.559321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.027 [2024-07-15 17:08:56.559327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.027 [2024-07-15 17:08:56.559333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.027 [2024-07-15 17:08:56.559347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.027 qpair failed and we were unable to recover it.
00:26:50.027 [2024-07-15 17:08:56.569264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.027 [2024-07-15 17:08:56.569342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.027 [2024-07-15 17:08:56.569357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.027 [2024-07-15 17:08:56.569364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.027 [2024-07-15 17:08:56.569369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.027 [2024-07-15 17:08:56.569383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.027 qpair failed and we were unable to recover it.
00:26:50.027 [2024-07-15 17:08:56.579259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.027 [2024-07-15 17:08:56.579323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.027 [2024-07-15 17:08:56.579338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.027 [2024-07-15 17:08:56.579344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.027 [2024-07-15 17:08:56.579350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.027 [2024-07-15 17:08:56.579364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.027 qpair failed and we were unable to recover it.
00:26:50.027 [2024-07-15 17:08:56.589334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.027 [2024-07-15 17:08:56.589392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.027 [2024-07-15 17:08:56.589407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.027 [2024-07-15 17:08:56.589413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.027 [2024-07-15 17:08:56.589419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.027 [2024-07-15 17:08:56.589433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.027 qpair failed and we were unable to recover it.
00:26:50.027 [2024-07-15 17:08:56.599336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.027 [2024-07-15 17:08:56.599400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.027 [2024-07-15 17:08:56.599415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.027 [2024-07-15 17:08:56.599421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.027 [2024-07-15 17:08:56.599427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.027 [2024-07-15 17:08:56.599440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.027 qpair failed and we were unable to recover it.
00:26:50.027 [2024-07-15 17:08:56.609386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.027 [2024-07-15 17:08:56.609490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.027 [2024-07-15 17:08:56.609512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.028 [2024-07-15 17:08:56.609519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.028 [2024-07-15 17:08:56.609525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.028 [2024-07-15 17:08:56.609538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.028 qpair failed and we were unable to recover it.
00:26:50.028 [2024-07-15 17:08:56.619461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.028 [2024-07-15 17:08:56.619525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.028 [2024-07-15 17:08:56.619540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.028 [2024-07-15 17:08:56.619547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.028 [2024-07-15 17:08:56.619553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.028 [2024-07-15 17:08:56.619566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.028 qpair failed and we were unable to recover it.
00:26:50.028 [2024-07-15 17:08:56.629438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.028 [2024-07-15 17:08:56.629503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.028 [2024-07-15 17:08:56.629518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.028 [2024-07-15 17:08:56.629527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.028 [2024-07-15 17:08:56.629533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.028 [2024-07-15 17:08:56.629546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.028 qpair failed and we were unable to recover it.
00:26:50.028 [2024-07-15 17:08:56.639507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.028 [2024-07-15 17:08:56.639597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.028 [2024-07-15 17:08:56.639613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.028 [2024-07-15 17:08:56.639619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.028 [2024-07-15 17:08:56.639625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.028 [2024-07-15 17:08:56.639638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.028 qpair failed and we were unable to recover it.
00:26:50.028 [2024-07-15 17:08:56.649509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.028 [2024-07-15 17:08:56.649569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.028 [2024-07-15 17:08:56.649584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.028 [2024-07-15 17:08:56.649591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.028 [2024-07-15 17:08:56.649596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.028 [2024-07-15 17:08:56.649610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.028 qpair failed and we were unable to recover it.
00:26:50.028 [2024-07-15 17:08:56.659541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.028 [2024-07-15 17:08:56.659601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.028 [2024-07-15 17:08:56.659617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.028 [2024-07-15 17:08:56.659623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.028 [2024-07-15 17:08:56.659630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.028 [2024-07-15 17:08:56.659643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.028 qpair failed and we were unable to recover it.
00:26:50.028 [2024-07-15 17:08:56.669505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.028 [2024-07-15 17:08:56.669572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.028 [2024-07-15 17:08:56.669587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.028 [2024-07-15 17:08:56.669593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.028 [2024-07-15 17:08:56.669599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.028 [2024-07-15 17:08:56.669612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.028 qpair failed and we were unable to recover it.
00:26:50.028 [2024-07-15 17:08:56.679596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.028 [2024-07-15 17:08:56.679658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.028 [2024-07-15 17:08:56.679673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.028 [2024-07-15 17:08:56.679679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.028 [2024-07-15 17:08:56.679685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.028 [2024-07-15 17:08:56.679698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.028 qpair failed and we were unable to recover it.
00:26:50.028 [2024-07-15 17:08:56.689679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.028 [2024-07-15 17:08:56.689765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.028 [2024-07-15 17:08:56.689782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.028 [2024-07-15 17:08:56.689789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.028 [2024-07-15 17:08:56.689796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.028 [2024-07-15 17:08:56.689810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.028 qpair failed and we were unable to recover it.
00:26:50.289 [2024-07-15 17:08:56.699630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.289 [2024-07-15 17:08:56.699696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.289 [2024-07-15 17:08:56.699712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.289 [2024-07-15 17:08:56.699719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.289 [2024-07-15 17:08:56.699725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.289 [2024-07-15 17:08:56.699739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.289 qpair failed and we were unable to recover it.
00:26:50.289 [2024-07-15 17:08:56.709676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.289 [2024-07-15 17:08:56.709740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.289 [2024-07-15 17:08:56.709755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.289 [2024-07-15 17:08:56.709762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.289 [2024-07-15 17:08:56.709768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.289 [2024-07-15 17:08:56.709781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.289 qpair failed and we were unable to recover it.
00:26:50.289 [2024-07-15 17:08:56.719712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.289 [2024-07-15 17:08:56.719777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.289 [2024-07-15 17:08:56.719793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.289 [2024-07-15 17:08:56.719802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.289 [2024-07-15 17:08:56.719808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.289 [2024-07-15 17:08:56.719822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.289 qpair failed and we were unable to recover it.
00:26:50.289 [2024-07-15 17:08:56.729770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.289 [2024-07-15 17:08:56.729830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.289 [2024-07-15 17:08:56.729845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.289 [2024-07-15 17:08:56.729852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.289 [2024-07-15 17:08:56.729858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.289 [2024-07-15 17:08:56.729871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.289 qpair failed and we were unable to recover it.
00:26:50.289 [2024-07-15 17:08:56.739770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.289 [2024-07-15 17:08:56.739831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.289 [2024-07-15 17:08:56.739846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.289 [2024-07-15 17:08:56.739852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.289 [2024-07-15 17:08:56.739858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.289 [2024-07-15 17:08:56.739871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.289 qpair failed and we were unable to recover it.
00:26:50.289 [2024-07-15 17:08:56.749759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.289 [2024-07-15 17:08:56.749858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.289 [2024-07-15 17:08:56.749873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.289 [2024-07-15 17:08:56.749880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.289 [2024-07-15 17:08:56.749886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.289 [2024-07-15 17:08:56.749900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.289 qpair failed and we were unable to recover it.
00:26:50.289 [2024-07-15 17:08:56.759864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.289 [2024-07-15 17:08:56.759925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.289 [2024-07-15 17:08:56.759940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.289 [2024-07-15 17:08:56.759946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.289 [2024-07-15 17:08:56.759952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.289 [2024-07-15 17:08:56.759965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.289 qpair failed and we were unable to recover it.
00:26:50.289 [2024-07-15 17:08:56.769841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.289 [2024-07-15 17:08:56.769903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.289 [2024-07-15 17:08:56.769919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.289 [2024-07-15 17:08:56.769925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.289 [2024-07-15 17:08:56.769931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.289 [2024-07-15 17:08:56.769944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.289 qpair failed and we were unable to recover it.
00:26:50.289 [2024-07-15 17:08:56.779919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.289 [2024-07-15 17:08:56.779983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.289 [2024-07-15 17:08:56.779998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.290 [2024-07-15 17:08:56.780005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.290 [2024-07-15 17:08:56.780010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.290 [2024-07-15 17:08:56.780024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.290 qpair failed and we were unable to recover it.
00:26:50.290 [2024-07-15 17:08:56.789877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.290 [2024-07-15 17:08:56.789979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.290 [2024-07-15 17:08:56.789994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.290 [2024-07-15 17:08:56.790001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.290 [2024-07-15 17:08:56.790007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.290 [2024-07-15 17:08:56.790021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.290 qpair failed and we were unable to recover it.
00:26:50.290 [2024-07-15 17:08:56.799926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.290 [2024-07-15 17:08:56.799983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.290 [2024-07-15 17:08:56.799998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.290 [2024-07-15 17:08:56.800004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.290 [2024-07-15 17:08:56.800010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.290 [2024-07-15 17:08:56.800024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.290 qpair failed and we were unable to recover it.
00:26:50.290 [2024-07-15 17:08:56.809926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.290 [2024-07-15 17:08:56.809984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.290 [2024-07-15 17:08:56.810003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.290 [2024-07-15 17:08:56.810010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.290 [2024-07-15 17:08:56.810015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.290 [2024-07-15 17:08:56.810029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.290 qpair failed and we were unable to recover it.
00:26:50.290 [2024-07-15 17:08:56.819990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.290 [2024-07-15 17:08:56.820049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.290 [2024-07-15 17:08:56.820064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.290 [2024-07-15 17:08:56.820071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.290 [2024-07-15 17:08:56.820077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.290 [2024-07-15 17:08:56.820090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.290 qpair failed and we were unable to recover it.
00:26:50.290 [2024-07-15 17:08:56.830056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.290 [2024-07-15 17:08:56.830116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.290 [2024-07-15 17:08:56.830131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.290 [2024-07-15 17:08:56.830138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.290 [2024-07-15 17:08:56.830145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.290 [2024-07-15 17:08:56.830158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.290 qpair failed and we were unable to recover it.
00:26:50.290 [2024-07-15 17:08:56.840101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.290 [2024-07-15 17:08:56.840181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.290 [2024-07-15 17:08:56.840196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.290 [2024-07-15 17:08:56.840202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.290 [2024-07-15 17:08:56.840208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.290 [2024-07-15 17:08:56.840221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.290 qpair failed and we were unable to recover it.
00:26:50.290 [2024-07-15 17:08:56.850075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.290 [2024-07-15 17:08:56.850138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.290 [2024-07-15 17:08:56.850153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.290 [2024-07-15 17:08:56.850160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.290 [2024-07-15 17:08:56.850166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.290 [2024-07-15 17:08:56.850179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.290 qpair failed and we were unable to recover it.
00:26:50.290 [2024-07-15 17:08:56.860156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.290 [2024-07-15 17:08:56.860218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.290 [2024-07-15 17:08:56.860237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.290 [2024-07-15 17:08:56.860244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.290 [2024-07-15 17:08:56.860250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.290 [2024-07-15 17:08:56.860264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.290 qpair failed and we were unable to recover it.
00:26:50.290 [2024-07-15 17:08:56.870140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.290 [2024-07-15 17:08:56.870202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.290 [2024-07-15 17:08:56.870217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.290 [2024-07-15 17:08:56.870227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.290 [2024-07-15 17:08:56.870234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.290 [2024-07-15 17:08:56.870247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.290 qpair failed and we were unable to recover it. 
00:26:50.290 [2024-07-15 17:08:56.880217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.290 [2024-07-15 17:08:56.880280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.290 [2024-07-15 17:08:56.880295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.290 [2024-07-15 17:08:56.880302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.290 [2024-07-15 17:08:56.880308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.290 [2024-07-15 17:08:56.880321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.290 qpair failed and we were unable to recover it. 
00:26:50.290 [2024-07-15 17:08:56.890187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.290 [2024-07-15 17:08:56.890255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.290 [2024-07-15 17:08:56.890271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.290 [2024-07-15 17:08:56.890278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.290 [2024-07-15 17:08:56.890284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.290 [2024-07-15 17:08:56.890298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.290 qpair failed and we were unable to recover it. 
00:26:50.290 [2024-07-15 17:08:56.900247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.290 [2024-07-15 17:08:56.900308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.290 [2024-07-15 17:08:56.900330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.290 [2024-07-15 17:08:56.900337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.290 [2024-07-15 17:08:56.900343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.290 [2024-07-15 17:08:56.900357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.290 qpair failed and we were unable to recover it. 
00:26:50.290 [2024-07-15 17:08:56.910254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.290 [2024-07-15 17:08:56.910327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.290 [2024-07-15 17:08:56.910342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.290 [2024-07-15 17:08:56.910349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.290 [2024-07-15 17:08:56.910355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.290 [2024-07-15 17:08:56.910368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.290 qpair failed and we were unable to recover it. 
00:26:50.290 [2024-07-15 17:08:56.920280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.290 [2024-07-15 17:08:56.920345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.290 [2024-07-15 17:08:56.920361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.290 [2024-07-15 17:08:56.920367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.290 [2024-07-15 17:08:56.920373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.290 [2024-07-15 17:08:56.920386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.291 qpair failed and we were unable to recover it. 
00:26:50.291 [2024-07-15 17:08:56.930322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.291 [2024-07-15 17:08:56.930389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.291 [2024-07-15 17:08:56.930404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.291 [2024-07-15 17:08:56.930410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.291 [2024-07-15 17:08:56.930416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.291 [2024-07-15 17:08:56.930430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.291 qpair failed and we were unable to recover it. 
00:26:50.291 [2024-07-15 17:08:56.940348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.291 [2024-07-15 17:08:56.940438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.291 [2024-07-15 17:08:56.940453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.291 [2024-07-15 17:08:56.940459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.291 [2024-07-15 17:08:56.940465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.291 [2024-07-15 17:08:56.940482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.291 qpair failed and we were unable to recover it. 
00:26:50.291 [2024-07-15 17:08:56.950401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.291 [2024-07-15 17:08:56.950510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.291 [2024-07-15 17:08:56.950525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.291 [2024-07-15 17:08:56.950532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.291 [2024-07-15 17:08:56.950538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.291 [2024-07-15 17:08:56.950551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.291 qpair failed and we were unable to recover it. 
00:26:50.551 [2024-07-15 17:08:56.960389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.551 [2024-07-15 17:08:56.960451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.551 [2024-07-15 17:08:56.960465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.551 [2024-07-15 17:08:56.960472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.551 [2024-07-15 17:08:56.960478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.551 [2024-07-15 17:08:56.960492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.551 qpair failed and we were unable to recover it. 
00:26:50.551 [2024-07-15 17:08:56.970432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.551 [2024-07-15 17:08:56.970495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.551 [2024-07-15 17:08:56.970510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.551 [2024-07-15 17:08:56.970517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.551 [2024-07-15 17:08:56.970522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.551 [2024-07-15 17:08:56.970535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.551 qpair failed and we were unable to recover it. 
00:26:50.551 [2024-07-15 17:08:56.980470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.551 [2024-07-15 17:08:56.980570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:56.980585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:56.980591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:56.980597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:56.980612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:56.990474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:56.990537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:56.990555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:56.990561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:56.990567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:56.990580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.000497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.000555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:57.000570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:57.000576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:57.000582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:57.000595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.010553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.010624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:57.010639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:57.010646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:57.010651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:57.010664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.020592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.020656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:57.020671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:57.020678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:57.020683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:57.020697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.030590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.030658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:57.030673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:57.030679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:57.030685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:57.030701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.040664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.040724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:57.040739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:57.040746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:57.040752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:57.040765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.050672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.050738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:57.050752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:57.050759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:57.050764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:57.050778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.060708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.060769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:57.060784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:57.060790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:57.060796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:57.060809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.070755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.070820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:57.070834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:57.070841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:57.070847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:57.070860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.080730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.080790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:57.080808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:57.080815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:57.080821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:57.080834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.090759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.090822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:57.090837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:57.090843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:57.090849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:57.090863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.100838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.100900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.552 [2024-07-15 17:08:57.100915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.552 [2024-07-15 17:08:57.100922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.552 [2024-07-15 17:08:57.100928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.552 [2024-07-15 17:08:57.100941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.552 qpair failed and we were unable to recover it. 
00:26:50.552 [2024-07-15 17:08:57.110862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.552 [2024-07-15 17:08:57.110925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.553 [2024-07-15 17:08:57.110940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.553 [2024-07-15 17:08:57.110947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.553 [2024-07-15 17:08:57.110952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.553 [2024-07-15 17:08:57.110965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.553 qpair failed and we were unable to recover it. 
00:26:50.553 [2024-07-15 17:08:57.120897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.553 [2024-07-15 17:08:57.120960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.553 [2024-07-15 17:08:57.120975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.553 [2024-07-15 17:08:57.120981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.553 [2024-07-15 17:08:57.120987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.553 [2024-07-15 17:08:57.121004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.553 qpair failed and we were unable to recover it. 
00:26:50.553 [2024-07-15 17:08:57.130868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.553 [2024-07-15 17:08:57.130929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.553 [2024-07-15 17:08:57.130943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.553 [2024-07-15 17:08:57.130950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.553 [2024-07-15 17:08:57.130956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.553 [2024-07-15 17:08:57.130969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.553 qpair failed and we were unable to recover it. 
00:26:50.553 [2024-07-15 17:08:57.140924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.553 [2024-07-15 17:08:57.140989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.553 [2024-07-15 17:08:57.141004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.553 [2024-07-15 17:08:57.141010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.553 [2024-07-15 17:08:57.141016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0 00:26:50.553 [2024-07-15 17:08:57.141029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.553 qpair failed and we were unable to recover it. 
00:26:50.553 [2024-07-15 17:08:57.150918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.553 [2024-07-15 17:08:57.150982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.553 [2024-07-15 17:08:57.150997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.553 [2024-07-15 17:08:57.151003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.553 [2024-07-15 17:08:57.151009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.553 [2024-07-15 17:08:57.151023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.553 qpair failed and we were unable to recover it.
00:26:50.553 [2024-07-15 17:08:57.160957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.553 [2024-07-15 17:08:57.161018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.553 [2024-07-15 17:08:57.161033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.553 [2024-07-15 17:08:57.161039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.553 [2024-07-15 17:08:57.161045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.553 [2024-07-15 17:08:57.161058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.553 qpair failed and we were unable to recover it.
00:26:50.553 [2024-07-15 17:08:57.171018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.553 [2024-07-15 17:08:57.171080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.553 [2024-07-15 17:08:57.171098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.553 [2024-07-15 17:08:57.171105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.553 [2024-07-15 17:08:57.171111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.553 [2024-07-15 17:08:57.171124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.553 qpair failed and we were unable to recover it.
00:26:50.553 [2024-07-15 17:08:57.181014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.553 [2024-07-15 17:08:57.181074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.553 [2024-07-15 17:08:57.181089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.553 [2024-07-15 17:08:57.181096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.553 [2024-07-15 17:08:57.181101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.553 [2024-07-15 17:08:57.181114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.553 qpair failed and we were unable to recover it.
00:26:50.553 [2024-07-15 17:08:57.191036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.553 [2024-07-15 17:08:57.191095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.553 [2024-07-15 17:08:57.191109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.553 [2024-07-15 17:08:57.191116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.553 [2024-07-15 17:08:57.191122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.553 [2024-07-15 17:08:57.191135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.553 qpair failed and we were unable to recover it.
00:26:50.553 [2024-07-15 17:08:57.201045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.553 [2024-07-15 17:08:57.201110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.553 [2024-07-15 17:08:57.201124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.553 [2024-07-15 17:08:57.201131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.553 [2024-07-15 17:08:57.201136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.553 [2024-07-15 17:08:57.201150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.553 qpair failed and we were unable to recover it.
00:26:50.553 [2024-07-15 17:08:57.211044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.553 [2024-07-15 17:08:57.211107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.553 [2024-07-15 17:08:57.211123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.553 [2024-07-15 17:08:57.211130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.553 [2024-07-15 17:08:57.211139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.553 [2024-07-15 17:08:57.211153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.553 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.221178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.221242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.221257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.221264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.221269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.221283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.231196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.231258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.231273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.231280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.231285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.231299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.241244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.241310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.241325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.241332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.241338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.241351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.251215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.251279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.251295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.251301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.251307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.251321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.261222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.261292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.261309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.261315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.261321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.261336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.271275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.271340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.271355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.271361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.271367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.271380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.281311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.281368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.281384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.281390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.281395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.281408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.291333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.291396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.291411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.291418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.291423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.291437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.301371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.301433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.301448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.301454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.301463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.301476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.311387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.311453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.311468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.311474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.311480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.311494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.321429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.321487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.321503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.321509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.321515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.321528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.331465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.331528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.331543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.331549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.331555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.331568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.341492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.814 [2024-07-15 17:08:57.341553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.814 [2024-07-15 17:08:57.341568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.814 [2024-07-15 17:08:57.341575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.814 [2024-07-15 17:08:57.341580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd91ed0
00:26:50.814 [2024-07-15 17:08:57.341593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.814 qpair failed and we were unable to recover it.
00:26:50.814 [2024-07-15 17:08:57.351502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.814 [2024-07-15 17:08:57.351572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.814 [2024-07-15 17:08:57.351592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.814 [2024-07-15 17:08:57.351600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.814 [2024-07-15 17:08:57.351606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:50.815 [2024-07-15 17:08:57.351624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.815 qpair failed and we were unable to recover it. 
00:26:50.815 [2024-07-15 17:08:57.361536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.815 [2024-07-15 17:08:57.361596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.815 [2024-07-15 17:08:57.361611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.815 [2024-07-15 17:08:57.361618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.815 [2024-07-15 17:08:57.361624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d4c000b90 00:26:50.815 [2024-07-15 17:08:57.361638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.815 qpair failed and we were unable to recover it. 
00:26:50.815 [2024-07-15 17:08:57.371635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.815 [2024-07-15 17:08:57.371752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.815 [2024-07-15 17:08:57.371779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.815 [2024-07-15 17:08:57.371790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.815 [2024-07-15 17:08:57.371799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d44000b90 00:26:50.815 [2024-07-15 17:08:57.371821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.815 qpair failed and we were unable to recover it. 
00:26:50.815 [2024-07-15 17:08:57.381663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.815 [2024-07-15 17:08:57.381750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.815 [2024-07-15 17:08:57.381766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.815 [2024-07-15 17:08:57.381773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.815 [2024-07-15 17:08:57.381779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d44000b90 00:26:50.815 [2024-07-15 17:08:57.381795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.815 qpair failed and we were unable to recover it. 00:26:50.815 [2024-07-15 17:08:57.381895] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:50.815 A controller has encountered a failure and is being reset. 
00:26:50.815 [2024-07-15 17:08:57.391639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.815 [2024-07-15 17:08:57.391725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.815 [2024-07-15 17:08:57.391756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.815 [2024-07-15 17:08:57.391768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.815 [2024-07-15 17:08:57.391777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d54000b90 00:26:50.815 [2024-07-15 17:08:57.391800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:50.815 qpair failed and we were unable to recover it. 
00:26:50.815 [2024-07-15 17:08:57.401645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.815 [2024-07-15 17:08:57.401713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.815 [2024-07-15 17:08:57.401730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.815 [2024-07-15 17:08:57.401737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.815 [2024-07-15 17:08:57.401743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d54000b90 00:26:50.815 [2024-07-15 17:08:57.401758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:50.815 qpair failed and we were unable to recover it. 00:26:50.815 Controller properly reset. 00:26:50.815 Initializing NVMe Controllers 00:26:50.815 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:50.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:50.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:50.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:50.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:50.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:50.815 Initialization complete. Launching workers. 
00:26:50.815 Starting thread on core 1
00:26:50.815 Starting thread on core 2
00:26:50.815 Starting thread on core 3
00:26:50.815 Starting thread on core 0
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:26:50.815
00:26:50.815 real	0m11.220s
00:26:50.815 user	0m21.431s
00:26:50.815 sys	0m4.200s
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:50.815 ************************************
00:26:50.815 END TEST nvmf_target_disconnect_tc2
00:26:50.815 ************************************
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:50.815 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:50.815 rmmod nvme_tcp
00:26:51.105 rmmod nvme_fabrics
00:26:51.105 rmmod nvme_keyring
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 239046 ']'
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 239046
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 239046 ']'
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 239046
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 239046
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 239046'
00:26:51.105 killing process with pid 239046
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 239046
00:26:51.105 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 239046
00:26:51.364 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:51.364 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:51.364 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:51.364 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:51.364 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:51.364 17:08:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:51.364 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:51.364 17:08:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:53.271 17:08:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:53.271
00:26:53.271 real	0m19.256s
00:26:53.271 user	0m48.183s
00:26:53.271 sys	0m8.575s
00:26:53.271 17:08:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:53.271 17:08:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:26:53.271 ************************************
00:26:53.271 END TEST nvmf_target_disconnect
00:26:53.271 ************************************
00:26:53.271 17:08:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:53.271 17:08:59 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:26:53.271 17:08:59 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:53.271 17:08:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:53.271 17:08:59 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:26:53.271
00:26:53.271 real	20m55.372s
00:26:53.271 user	45m6.415s
00:26:53.271 sys	6m20.601s
00:26:53.271 17:08:59 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:53.271 17:08:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:53.271 ************************************
00:26:53.271 END TEST nvmf_tcp
00:26:53.271 ************************************
00:26:53.530 17:08:59 -- common/autotest_common.sh@1142 -- # return 0
00:26:53.530 17:08:59 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:26:53.530 17:08:59 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:26:53.531 17:08:59 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:53.531 17:08:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:53.531 17:08:59 -- common/autotest_common.sh@10 -- # set +x
00:26:53.531 ************************************
00:26:53.531 START TEST spdkcli_nvmf_tcp
00:26:53.531 ************************************
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:26:53.531 * Looking for test storage...
00:26:53.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:53.531 17:09:00 spdkcli_nvmf_tcp --
nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=240596 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 240596 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 240596 ']' 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.531 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.531 [2024-07-15 17:09:00.171117] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:26:53.531 [2024-07-15 17:09:00.171169] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid240596 ] 00:26:53.531 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.790 [2024-07-15 17:09:00.226430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:53.790 [2024-07-15 17:09:00.302046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.790 [2024-07-15 17:09:00.302049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.358 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:54.359 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:26:54.359 17:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:54.359 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:54.359 17:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.359 17:09:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:54.359 17:09:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:54.359 17:09:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:54.359 17:09:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:54.359 17:09:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.359 17:09:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:54.359 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:54.359 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:54.359 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:54.359 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:54.359 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:54.359 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:54.359 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:54.359 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:54.359 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:54.359 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:54.359 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:54.359 ' 00:26:56.894 [2024-07-15 17:09:03.388855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.272 [2024-07-15 17:09:04.564765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:00.179 [2024-07-15 17:09:06.727339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:02.085 [2024-07-15 17:09:08.585105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:03.463 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', 
True] 00:27:03.463 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:03.463 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:03.463 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:03.463 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:03.463 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:03.463 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:03.463 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:03.463 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:03.463 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:03.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:03.463 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:03.463 17:09:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:03.721 17:09:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.721 17:09:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.721 17:09:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:03.721 17:09:10 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.721 17:09:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.721 17:09:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:03.721 17:09:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:03.980 17:09:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:03.980 17:09:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:03.980 17:09:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:03.980 17:09:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.980 17:09:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.980 17:09:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:03.980 17:09:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.980 17:09:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.980 17:09:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:03.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:03.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:03.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:03.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:03.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:03.980 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:03.980 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:03.980 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:03.980 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:03.980 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:03.980 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:03.980 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:03.980 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:03.980 ' 00:27:09.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:09.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:09.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:09.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:09.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:09.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:09.251 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:09.252 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:09.252 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:09.252 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:09.252 
Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:09.252 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:09.252 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:09.252 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 240596 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 240596 ']' 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 240596 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 240596 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 240596' 00:27:09.252 killing process with pid 240596 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 240596 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 240596 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 240596 ']' 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@14 -- # killprocess 240596 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 240596 ']' 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 240596 00:27:09.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (240596) - No such process 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 240596 is not found' 00:27:09.252 Process with pid 240596 is not found 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:09.252 00:27:09.252 real 0m15.806s 00:27:09.252 user 0m32.751s 00:27:09.252 sys 0m0.694s 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:09.252 17:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:09.252 ************************************ 00:27:09.252 END TEST spdkcli_nvmf_tcp 00:27:09.252 ************************************ 00:27:09.252 17:09:15 -- common/autotest_common.sh@1142 -- # return 0 00:27:09.252 17:09:15 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:09.252 17:09:15 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:09.252 17:09:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.252 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:27:09.252 ************************************ 00:27:09.252 START TEST nvmf_identify_passthru 00:27:09.252 
************************************ 00:27:09.252 17:09:15 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:09.510 * Looking for test storage... 00:27:09.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:09.510 17:09:15 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.510 17:09:15 nvmf_identify_passthru 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.510 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.511 17:09:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.511 17:09:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.511 17:09:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.511 17:09:15 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.511 17:09:15 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.511 17:09:15 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.511 17:09:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:09.511 17:09:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:09.511 17:09:15 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.511 17:09:15 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.511 17:09:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.511 17:09:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.511 17:09:15 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.511 17:09:15 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.511 17:09:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.511 17:09:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:27:09.511 17:09:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.511 17:09:15 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.511 17:09:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:09.511 17:09:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:09.511 17:09:15 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:09.511 17:09:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:14.803 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.803 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:27:14.803 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:14.803 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:14.803 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.804 17:09:20 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:14.804 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:14.804 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:14.804 17:09:20 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:14.804 Found net devices under 0000:86:00.0: cvl_0_0 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.804 17:09:20 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:14.804 Found net devices under 0000:86:00.1: cvl_0_1 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.804 17:09:20 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:14.804 17:09:20 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:14.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:27:14.804 00:27:14.804 --- 10.0.0.2 ping statistics --- 00:27:14.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.804 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:27:14.804 00:27:14.804 --- 10.0.0.1 ping statistics --- 00:27:14.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.804 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:14.804 17:09:21 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:14.804 17:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:14.804 17:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:14.804 17:09:21 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:27:14.804 17:09:21 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:27:14.804 17:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:14.804 17:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:14.804 17:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:14.804 17:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:14.804 17:09:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:14.804 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.998 17:09:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:18.998 17:09:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:18.998 17:09:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:27:18.998 17:09:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:18.998 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.192 17:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:23.192 17:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:23.192 17:09:29 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:23.192 17:09:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.192 17:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:23.192 17:09:29 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:23.192 17:09:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.192 17:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:23.192 17:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=248111 00:27:23.192 17:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:23.192 17:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 248111 00:27:23.192 17:09:29 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 248111 ']' 00:27:23.192 17:09:29 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.192 17:09:29 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:23.192 17:09:29 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:23.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.192 17:09:29 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:23.192 17:09:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.192 [2024-07-15 17:09:29.461328] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:27:23.192 [2024-07-15 17:09:29.461375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.192 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.192 [2024-07-15 17:09:29.513517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:23.192 [2024-07-15 17:09:29.593587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.192 [2024-07-15 17:09:29.593622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.192 [2024-07-15 17:09:29.593629] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.192 [2024-07-15 17:09:29.593635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.192 [2024-07-15 17:09:29.593641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:23.192 [2024-07-15 17:09:29.593689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.192 [2024-07-15 17:09:29.593709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:23.192 [2024-07-15 17:09:29.593795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:23.192 [2024-07-15 17:09:29.593796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:27:23.760 17:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.760 INFO: Log level set to 20 00:27:23.760 INFO: Requests: 00:27:23.760 { 00:27:23.760 "jsonrpc": "2.0", 00:27:23.760 "method": "nvmf_set_config", 00:27:23.760 "id": 1, 00:27:23.760 "params": { 00:27:23.760 "admin_cmd_passthru": { 00:27:23.760 "identify_ctrlr": true 00:27:23.760 } 00:27:23.760 } 00:27:23.760 } 00:27:23.760 00:27:23.760 INFO: response: 00:27:23.760 { 00:27:23.760 "jsonrpc": "2.0", 00:27:23.760 "id": 1, 00:27:23.760 "result": true 00:27:23.760 } 00:27:23.760 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.760 17:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.760 INFO: Setting log level to 20 00:27:23.760 INFO: Setting log level to 20 00:27:23.760 INFO: Log level set to 20 00:27:23.760 INFO: Log level set to 20 00:27:23.760 
INFO: Requests: 00:27:23.760 { 00:27:23.760 "jsonrpc": "2.0", 00:27:23.760 "method": "framework_start_init", 00:27:23.760 "id": 1 00:27:23.760 } 00:27:23.760 00:27:23.760 INFO: Requests: 00:27:23.760 { 00:27:23.760 "jsonrpc": "2.0", 00:27:23.760 "method": "framework_start_init", 00:27:23.760 "id": 1 00:27:23.760 } 00:27:23.760 00:27:23.760 [2024-07-15 17:09:30.370067] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:23.760 INFO: response: 00:27:23.760 { 00:27:23.760 "jsonrpc": "2.0", 00:27:23.760 "id": 1, 00:27:23.760 "result": true 00:27:23.760 } 00:27:23.760 00:27:23.760 INFO: response: 00:27:23.760 { 00:27:23.760 "jsonrpc": "2.0", 00:27:23.760 "id": 1, 00:27:23.760 "result": true 00:27:23.760 } 00:27:23.760 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.760 17:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.760 INFO: Setting log level to 40 00:27:23.760 INFO: Setting log level to 40 00:27:23.760 INFO: Setting log level to 40 00:27:23.760 [2024-07-15 17:09:30.379553] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.760 17:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.760 17:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:27:23.760 17:09:30 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.760 17:09:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.049 Nvme0n1 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.049 [2024-07-15 17:09:33.272483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.049 17:09:33 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.049 [ 00:27:27.049 { 00:27:27.049 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:27.049 "subtype": "Discovery", 00:27:27.049 "listen_addresses": [], 00:27:27.049 "allow_any_host": true, 00:27:27.049 "hosts": [] 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.049 "subtype": "NVMe", 00:27:27.049 "listen_addresses": [ 00:27:27.049 { 00:27:27.049 "trtype": "TCP", 00:27:27.049 "adrfam": "IPv4", 00:27:27.049 "traddr": "10.0.0.2", 00:27:27.049 "trsvcid": "4420" 00:27:27.049 } 00:27:27.049 ], 00:27:27.049 "allow_any_host": true, 00:27:27.049 "hosts": [], 00:27:27.049 "serial_number": "SPDK00000000000001", 00:27:27.049 "model_number": "SPDK bdev Controller", 00:27:27.049 "max_namespaces": 1, 00:27:27.049 "min_cntlid": 1, 00:27:27.049 "max_cntlid": 65519, 00:27:27.049 "namespaces": [ 00:27:27.049 { 00:27:27.049 "nsid": 1, 00:27:27.049 "bdev_name": "Nvme0n1", 00:27:27.049 "name": "Nvme0n1", 00:27:27.049 "nguid": "8E8D44CA2673495CB2D306F47C01786D", 00:27:27.049 "uuid": "8e8d44ca-2673-495c-b2d3-06f47c01786d" 00:27:27.049 } 00:27:27.049 ] 00:27:27.049 } 00:27:27.049 ] 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:27.049 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:27:27.049 17:09:33 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:27.049 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.049 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:27.049 17:09:33 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:27.049 17:09:33 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:27.049 17:09:33 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:27.049 17:09:33 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:27.049 17:09:33 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:27.049 17:09:33 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:27.049 17:09:33 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:27.049 rmmod 
nvme_tcp 00:27:27.307 rmmod nvme_fabrics 00:27:27.307 rmmod nvme_keyring 00:27:27.307 17:09:33 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:27.307 17:09:33 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:27.307 17:09:33 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:27.307 17:09:33 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 248111 ']' 00:27:27.307 17:09:33 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 248111 00:27:27.307 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 248111 ']' 00:27:27.307 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 248111 00:27:27.307 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:27:27.307 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:27.307 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 248111 00:27:27.307 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:27.307 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:27.307 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 248111' 00:27:27.307 killing process with pid 248111 00:27:27.307 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 248111 00:27:27.307 17:09:33 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 248111 00:27:28.681 17:09:35 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:28.681 17:09:35 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:28.681 17:09:35 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:28.681 17:09:35 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:28.681 
17:09:35 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:28.681 17:09:35 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.681 17:09:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:28.681 17:09:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.214 17:09:37 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.214 00:27:31.214 real 0m21.483s 00:27:31.214 user 0m29.944s 00:27:31.214 sys 0m4.661s 00:27:31.214 17:09:37 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.214 17:09:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 ************************************ 00:27:31.214 END TEST nvmf_identify_passthru 00:27:31.214 ************************************ 00:27:31.214 17:09:37 -- common/autotest_common.sh@1142 -- # return 0 00:27:31.214 17:09:37 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:31.214 17:09:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:31.214 17:09:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.214 17:09:37 -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 ************************************ 00:27:31.214 START TEST nvmf_dif 00:27:31.214 ************************************ 00:27:31.214 17:09:37 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:31.214 * Looking for test storage... 
00:27:31.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:31.214 17:09:37 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.214 17:09:37 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.214 17:09:37 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.214 17:09:37 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.214 17:09:37 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.214 17:09:37 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.214 17:09:37 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.214 17:09:37 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:31.214 17:09:37 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.214 17:09:37 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:31.214 17:09:37 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:31.214 17:09:37 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:31.214 17:09:37 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:31.214 17:09:37 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.214 17:09:37 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:31.214 17:09:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:31.214 17:09:37 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.214 17:09:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:36.491 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:27:36.491 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:36.491 Found net devices under 0000:86:00.0: cvl_0_0 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:36.491 Found net devices under 0000:86:00.1: cvl_0_1 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.491 17:09:42 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.491 17:09:42 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.492 17:09:42 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.492 17:09:42 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:36.492 17:09:42 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.492 17:09:42 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.492 17:09:42 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.492 17:09:42 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:36.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:27:36.492 00:27:36.492 --- 10.0.0.2 ping statistics --- 00:27:36.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.492 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:27:36.492 17:09:42 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:36.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:27:36.492 00:27:36.492 --- 10.0.0.1 ping statistics --- 00:27:36.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.492 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:27:36.492 17:09:42 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.492 17:09:42 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:36.492 17:09:42 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:36.492 17:09:42 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:38.429 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:38.429 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:38.429 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:38.429 17:09:45 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.429 17:09:45 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.429 17:09:45 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.429 17:09:45 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.429 17:09:45 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.429 17:09:45 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:38.429 17:09:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:38.429 17:09:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:38.429 17:09:45 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:38.429 17:09:45 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:38.429 17:09:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:38.429 17:09:45 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=253566 00:27:38.429 17:09:45 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 253566 00:27:38.429 17:09:45 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:38.429 17:09:45 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 253566 ']' 00:27:38.429 17:09:45 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.429 17:09:45 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:38.429 17:09:45 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.429 17:09:45 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:38.429 17:09:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:38.709 [2024-07-15 17:09:45.105712] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:27:38.709 [2024-07-15 17:09:45.105755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.709 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.709 [2024-07-15 17:09:45.162497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.709 [2024-07-15 17:09:45.240948] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.709 [2024-07-15 17:09:45.240984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.709 [2024-07-15 17:09:45.240991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.709 [2024-07-15 17:09:45.240997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.709 [2024-07-15 17:09:45.241002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:38.709 [2024-07-15 17:09:45.241019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.277 17:09:45 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.277 17:09:45 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:27:39.277 17:09:45 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.277 17:09:45 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:39.277 17:09:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:39.277 17:09:45 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.277 17:09:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:39.277 17:09:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:39.277 17:09:45 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.277 17:09:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:39.537 [2024-07-15 17:09:45.947009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.537 17:09:45 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.537 17:09:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:39.537 17:09:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:39.537 17:09:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.537 17:09:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:39.537 ************************************ 00:27:39.537 START TEST fio_dif_1_default 00:27:39.537 ************************************ 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:39.537 bdev_null0 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.537 17:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:39.537 [2024-07-15 17:09:46.019309] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:39.537 { 00:27:39.537 "params": { 00:27:39.537 "name": "Nvme$subsystem", 00:27:39.537 "trtype": "$TEST_TRANSPORT", 00:27:39.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.537 "adrfam": "ipv4", 00:27:39.537 "trsvcid": "$NVMF_PORT", 00:27:39.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.537 "hdgst": ${hdgst:-false}, 00:27:39.537 "ddgst": ${ddgst:-false} 00:27:39.537 }, 00:27:39.537 "method": "bdev_nvme_attach_controller" 00:27:39.537 } 00:27:39.537 EOF 00:27:39.537 )") 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:39.537 17:09:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:39.538 "params": { 00:27:39.538 "name": "Nvme0", 00:27:39.538 "trtype": "tcp", 00:27:39.538 "traddr": "10.0.0.2", 00:27:39.538 "adrfam": "ipv4", 00:27:39.538 "trsvcid": "4420", 00:27:39.538 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.538 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:39.538 "hdgst": false, 00:27:39.538 "ddgst": false 00:27:39.538 }, 00:27:39.538 "method": "bdev_nvme_attach_controller" 00:27:39.538 }' 00:27:39.538 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:39.538 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:39.538 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:39.538 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:39.538 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:39.538 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:39.538 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:39.538 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:39.538 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:39.538 17:09:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:39.796 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:39.796 fio-3.35 
00:27:39.796 Starting 1 thread 00:27:39.796 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.011 00:27:52.011 filename0: (groupid=0, jobs=1): err= 0: pid=253947: Mon Jul 15 17:09:56 2024 00:27:52.011 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10020msec) 00:27:52.011 slat (nsec): min=4220, max=21304, avg=6168.37, stdev=1103.24 00:27:52.011 clat (usec): min=40803, max=48392, avg=41046.03, stdev=505.43 00:27:52.011 lat (usec): min=40809, max=48406, avg=41052.20, stdev=505.49 00:27:52.011 clat percentiles (usec): 00:27:52.011 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:27:52.011 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:52.011 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:52.011 | 99.00th=[42206], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:27:52.011 | 99.99th=[48497] 00:27:52.011 bw ( KiB/s): min= 384, max= 416, per=99.58%, avg=388.80, stdev=11.72, samples=20 00:27:52.011 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:27:52.011 lat (msec) : 50=100.00% 00:27:52.011 cpu : usr=95.01%, sys=4.74%, ctx=9, majf=0, minf=215 00:27:52.011 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.011 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.011 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:52.011 00:27:52.011 Run status group 0 (all jobs): 00:27:52.011 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10020-10020msec 00:27:52.011 17:09:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:52.011 17:09:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:52.011 17:09:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub 
in "$@" 00:27:52.011 17:09:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:52.011 17:09:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:52.011 17:09:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:52.011 17:09:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.011 17:09:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.011 00:27:52.011 real 0m11.029s 00:27:52.011 user 0m15.782s 00:27:52.011 sys 0m0.766s 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:52.011 ************************************ 00:27:52.011 END TEST fio_dif_1_default 00:27:52.011 ************************************ 00:27:52.011 17:09:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:52.011 17:09:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:52.011 17:09:57 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:52.011 17:09:57 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.011 17:09:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:52.011 ************************************ 00:27:52.011 START TEST fio_dif_1_multi_subsystems 
00:27:52.011 ************************************ 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.011 bdev_null0 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.011 [2024-07-15 17:09:57.123328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.011 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.011 bdev_null1 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.012 
17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.012 17:09:57 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.012 { 00:27:52.012 "params": { 00:27:52.012 "name": "Nvme$subsystem", 00:27:52.012 "trtype": "$TEST_TRANSPORT", 00:27:52.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.012 "adrfam": "ipv4", 00:27:52.012 "trsvcid": "$NVMF_PORT", 00:27:52.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.012 "hdgst": ${hdgst:-false}, 00:27:52.012 "ddgst": ${ddgst:-false} 00:27:52.012 }, 00:27:52.012 "method": "bdev_nvme_attach_controller" 00:27:52.012 } 00:27:52.012 EOF 00:27:52.012 )") 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.012 { 00:27:52.012 "params": { 00:27:52.012 "name": "Nvme$subsystem", 00:27:52.012 "trtype": "$TEST_TRANSPORT", 00:27:52.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.012 "adrfam": "ipv4", 00:27:52.012 "trsvcid": "$NVMF_PORT", 00:27:52.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.012 "hdgst": ${hdgst:-false}, 00:27:52.012 "ddgst": ${ddgst:-false} 00:27:52.012 }, 00:27:52.012 "method": "bdev_nvme_attach_controller" 00:27:52.012 } 00:27:52.012 EOF 00:27:52.012 )") 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:52.012 "params": { 00:27:52.012 "name": "Nvme0", 00:27:52.012 "trtype": "tcp", 00:27:52.012 "traddr": "10.0.0.2", 00:27:52.012 "adrfam": "ipv4", 00:27:52.012 "trsvcid": "4420", 00:27:52.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:52.012 "hdgst": false, 00:27:52.012 "ddgst": false 00:27:52.012 }, 00:27:52.012 "method": "bdev_nvme_attach_controller" 00:27:52.012 },{ 00:27:52.012 "params": { 00:27:52.012 "name": "Nvme1", 00:27:52.012 "trtype": "tcp", 00:27:52.012 "traddr": "10.0.0.2", 00:27:52.012 "adrfam": "ipv4", 00:27:52.012 "trsvcid": "4420", 00:27:52.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:52.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:52.012 "hdgst": false, 00:27:52.012 "ddgst": false 00:27:52.012 }, 00:27:52.012 "method": "bdev_nvme_attach_controller" 00:27:52.012 }' 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:52.012 17:09:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.012 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:52.012 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:52.012 fio-3.35 00:27:52.012 Starting 2 threads 00:27:52.012 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.991 00:28:01.991 filename0: (groupid=0, jobs=1): err= 0: pid=255915: Mon Jul 15 17:10:08 2024 00:28:01.991 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10001msec) 00:28:01.991 slat (nsec): min=6033, max=58816, avg=7345.89, stdev=2392.37 00:28:01.991 clat (usec): min=550, max=42506, avg=21032.93, stdev=20395.41 00:28:01.991 lat (usec): min=556, max=42513, avg=21040.27, stdev=20394.74 00:28:01.991 clat percentiles (usec): 00:28:01.991 | 1.00th=[ 562], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 578], 00:28:01.991 | 30.00th=[ 578], 40.00th=[ 635], 50.00th=[41157], 60.00th=[41157], 00:28:01.991 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:28:01.991 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:28:01.991 | 99.99th=[42730] 00:28:01.991 bw ( KiB/s): min= 702, max= 768, per=66.32%, avg=761.16, stdev=20.50, samples=19 00:28:01.991 iops : min= 175, max= 192, avg=190.26, stdev= 5.21, samples=19 00:28:01.991 lat (usec) : 750=48.74%, 1000=1.16% 00:28:01.991 lat (msec) : 50=50.11% 00:28:01.991 cpu : usr=97.70%, sys=2.05%, ctx=10, majf=0, minf=178 00:28:01.991 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:01.991 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.991 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.991 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:01.991 filename1: (groupid=0, jobs=1): err= 0: pid=255916: Mon Jul 15 17:10:08 2024 00:28:01.991 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10025msec) 00:28:01.991 slat (nsec): min=6026, max=29887, avg=8100.47, stdev=2849.22 00:28:01.991 clat (usec): min=40837, max=42513, avg=41060.18, stdev=279.60 00:28:01.991 lat (usec): min=40844, max=42543, avg=41068.28, stdev=280.55 00:28:01.991 clat percentiles (usec): 00:28:01.991 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:28:01.991 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:01.991 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:28:01.991 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:28:01.991 | 99.99th=[42730] 00:28:01.991 bw ( KiB/s): min= 384, max= 416, per=33.81%, avg=388.80, stdev=11.72, samples=20 00:28:01.991 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:28:01.991 lat (msec) : 50=100.00% 00:28:01.991 cpu : usr=97.91%, sys=1.85%, ctx=8, majf=0, minf=106 00:28:01.991 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:01.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.991 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.991 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:01.991 00:28:01.991 Run status group 0 (all jobs): 00:28:01.991 READ: bw=1148KiB/s (1175kB/s), 389KiB/s-760KiB/s (399kB/s-778kB/s), io=11.2MiB (11.8MB), run=10001-10025msec 00:28:01.991 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:01.991 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:01.991 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:01.991 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:01.991 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.992 00:28:01.992 real 0m11.482s 00:28:01.992 user 0m26.596s 00:28:01.992 sys 0m0.757s 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:01.992 17:10:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:01.992 ************************************ 00:28:01.992 END TEST fio_dif_1_multi_subsystems 00:28:01.992 ************************************ 00:28:01.992 17:10:08 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:01.992 17:10:08 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:01.992 17:10:08 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:01.992 17:10:08 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:01.992 17:10:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:01.992 ************************************ 00:28:01.992 START TEST fio_dif_rand_params 00:28:01.992 ************************************ 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 
00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.992 bdev_null0 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.992 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.251 [2024-07-15 17:10:08.672735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:02.251 17:10:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.251 { 00:28:02.251 
"params": { 00:28:02.251 "name": "Nvme$subsystem", 00:28:02.251 "trtype": "$TEST_TRANSPORT", 00:28:02.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.251 "adrfam": "ipv4", 00:28:02.251 "trsvcid": "$NVMF_PORT", 00:28:02.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.251 "hdgst": ${hdgst:-false}, 00:28:02.252 "ddgst": ${ddgst:-false} 00:28:02.252 }, 00:28:02.252 "method": "bdev_nvme_attach_controller" 00:28:02.252 } 00:28:02.252 EOF 00:28:02.252 )") 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:02.252 
17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:02.252 "params": { 00:28:02.252 "name": "Nvme0", 00:28:02.252 "trtype": "tcp", 00:28:02.252 "traddr": "10.0.0.2", 00:28:02.252 "adrfam": "ipv4", 00:28:02.252 "trsvcid": "4420", 00:28:02.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.252 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.252 "hdgst": false, 00:28:02.252 "ddgst": false 00:28:02.252 }, 00:28:02.252 "method": "bdev_nvme_attach_controller" 00:28:02.252 }' 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:02.252 17:10:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:02.252 17:10:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.511 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:02.511 ... 00:28:02.511 fio-3.35 00:28:02.511 Starting 3 threads 00:28:02.511 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.079 00:28:09.079 filename0: (groupid=0, jobs=1): err= 0: pid=257879: Mon Jul 15 17:10:14 2024 00:28:09.079 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(187MiB/5008msec) 00:28:09.079 slat (nsec): min=6219, max=32198, avg=9744.78, stdev=2733.67 00:28:09.079 clat (usec): min=3815, max=53889, avg=10033.82, stdev=10473.58 00:28:09.079 lat (usec): min=3821, max=53905, avg=10043.57, stdev=10473.84 00:28:09.079 clat percentiles (usec): 00:28:09.079 | 1.00th=[ 4178], 5.00th=[ 4424], 10.00th=[ 4686], 20.00th=[ 5735], 00:28:09.079 | 30.00th=[ 6259], 40.00th=[ 6652], 50.00th=[ 7177], 60.00th=[ 8029], 00:28:09.079 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[10945], 95.00th=[47973], 00:28:09.079 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51643], 99.95th=[53740], 00:28:09.079 | 99.99th=[53740] 00:28:09.079 bw ( KiB/s): min=30464, max=47616, per=35.24%, avg=38220.80, stdev=5591.84, samples=10 00:28:09.079 iops : min= 238, max= 372, avg=298.60, stdev=43.69, samples=10 00:28:09.079 lat (msec) : 4=0.20%, 10=83.81%, 20=9.36%, 50=5.75%, 100=0.87% 00:28:09.079 cpu : usr=95.45%, sys=4.21%, ctx=10, majf=0, minf=62 00:28:09.079 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:09.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.079 issued rwts: total=1495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.079 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:09.079 filename0: (groupid=0, jobs=1): err= 0: pid=257880: Mon 
Jul 15 17:10:14 2024 00:28:09.079 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(166MiB/5002msec) 00:28:09.079 slat (nsec): min=6252, max=25388, avg=9997.99, stdev=2748.80 00:28:09.079 clat (usec): min=3827, max=54860, avg=11286.05, stdev=12135.73 00:28:09.080 lat (usec): min=3833, max=54873, avg=11296.05, stdev=12135.97 00:28:09.080 clat percentiles (usec): 00:28:09.080 | 1.00th=[ 4047], 5.00th=[ 4424], 10.00th=[ 4621], 20.00th=[ 5800], 00:28:09.080 | 30.00th=[ 6390], 40.00th=[ 6849], 50.00th=[ 7701], 60.00th=[ 8455], 00:28:09.080 | 70.00th=[ 9110], 80.00th=[10159], 90.00th=[12256], 95.00th=[49021], 00:28:09.080 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[54789], 00:28:09.080 | 99.99th=[54789] 00:28:09.080 bw ( KiB/s): min=19200, max=47360, per=30.59%, avg=33176.44, stdev=10768.49, samples=9 00:28:09.080 iops : min= 150, max= 370, avg=259.11, stdev=84.02, samples=9 00:28:09.080 lat (msec) : 4=0.45%, 10=78.24%, 20=12.27%, 50=6.33%, 100=2.71% 00:28:09.080 cpu : usr=96.36%, sys=3.30%, ctx=7, majf=0, minf=68 00:28:09.080 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:09.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.080 issued rwts: total=1328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.080 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:09.080 filename0: (groupid=0, jobs=1): err= 0: pid=257881: Mon Jul 15 17:10:14 2024 00:28:09.080 read: IOPS=287, BW=36.0MiB/s (37.7MB/s)(182MiB/5045msec) 00:28:09.080 slat (nsec): min=6224, max=83633, avg=10007.57, stdev=3354.64 00:28:09.080 clat (usec): min=3887, max=52874, avg=10380.38, stdev=10825.64 00:28:09.080 lat (usec): min=3894, max=52887, avg=10390.39, stdev=10825.82 00:28:09.080 clat percentiles (usec): 00:28:09.080 | 1.00th=[ 4178], 5.00th=[ 4359], 10.00th=[ 4555], 20.00th=[ 5932], 00:28:09.080 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 
7373], 60.00th=[ 8225], 00:28:09.080 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[11207], 95.00th=[47973], 00:28:09.080 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52167], 99.95th=[52691], 00:28:09.080 | 99.99th=[52691] 00:28:09.080 bw ( KiB/s): min=27136, max=50688, per=34.22%, avg=37120.00, stdev=7473.37, samples=10 00:28:09.080 iops : min= 212, max= 396, avg=290.00, stdev=58.39, samples=10 00:28:09.080 lat (msec) : 4=0.28%, 10=82.64%, 20=9.92%, 50=5.51%, 100=1.65% 00:28:09.080 cpu : usr=96.05%, sys=3.61%, ctx=10, majf=0, minf=146 00:28:09.080 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:09.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.080 issued rwts: total=1452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.080 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:09.080 00:28:09.080 Run status group 0 (all jobs): 00:28:09.080 READ: bw=106MiB/s (111MB/s), 33.2MiB/s-37.3MiB/s (34.8MB/s-39.1MB/s), io=534MiB (560MB), run=5002-5045msec 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 bdev_null0 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 [2024-07-15 17:10:14.947534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:28:09.080 bdev_null1 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:09.080 
17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 bdev_null2 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
2 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:09.080 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.080 { 00:28:09.080 "params": { 00:28:09.080 "name": "Nvme$subsystem", 00:28:09.080 "trtype": "$TEST_TRANSPORT", 00:28:09.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.080 "adrfam": "ipv4", 00:28:09.080 "trsvcid": "$NVMF_PORT", 00:28:09.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.080 "hdgst": ${hdgst:-false}, 00:28:09.081 "ddgst": ${ddgst:-false} 00:28:09.081 }, 00:28:09.081 "method": "bdev_nvme_attach_controller" 00:28:09.081 } 00:28:09.081 EOF 00:28:09.081 )") 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.081 { 00:28:09.081 "params": { 00:28:09.081 "name": "Nvme$subsystem", 00:28:09.081 "trtype": "$TEST_TRANSPORT", 00:28:09.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.081 "adrfam": "ipv4", 00:28:09.081 "trsvcid": "$NVMF_PORT", 00:28:09.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.081 "hdgst": ${hdgst:-false}, 00:28:09.081 "ddgst": ${ddgst:-false} 00:28:09.081 }, 00:28:09.081 "method": "bdev_nvme_attach_controller" 00:28:09.081 } 00:28:09.081 EOF 00:28:09.081 )") 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.081 { 00:28:09.081 "params": { 00:28:09.081 "name": "Nvme$subsystem", 00:28:09.081 "trtype": "$TEST_TRANSPORT", 00:28:09.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.081 "adrfam": "ipv4", 00:28:09.081 "trsvcid": "$NVMF_PORT", 00:28:09.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.081 "hdgst": ${hdgst:-false}, 00:28:09.081 "ddgst": ${ddgst:-false} 00:28:09.081 }, 00:28:09.081 "method": "bdev_nvme_attach_controller" 00:28:09.081 } 00:28:09.081 EOF 00:28:09.081 )") 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:09.081 "params": { 00:28:09.081 "name": "Nvme0", 00:28:09.081 "trtype": "tcp", 00:28:09.081 "traddr": "10.0.0.2", 00:28:09.081 "adrfam": "ipv4", 00:28:09.081 "trsvcid": "4420", 00:28:09.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:09.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:09.081 "hdgst": false, 00:28:09.081 "ddgst": false 00:28:09.081 }, 00:28:09.081 "method": "bdev_nvme_attach_controller" 00:28:09.081 },{ 00:28:09.081 "params": { 00:28:09.081 "name": "Nvme1", 00:28:09.081 "trtype": "tcp", 00:28:09.081 "traddr": "10.0.0.2", 00:28:09.081 "adrfam": "ipv4", 00:28:09.081 "trsvcid": "4420", 00:28:09.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:09.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:09.081 "hdgst": false, 00:28:09.081 "ddgst": false 00:28:09.081 }, 00:28:09.081 "method": "bdev_nvme_attach_controller" 00:28:09.081 },{ 00:28:09.081 "params": { 00:28:09.081 "name": "Nvme2", 00:28:09.081 "trtype": "tcp", 00:28:09.081 "traddr": "10.0.0.2", 00:28:09.081 "adrfam": "ipv4", 00:28:09.081 "trsvcid": "4420", 00:28:09.081 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:09.081 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:09.081 "hdgst": false, 00:28:09.081 "ddgst": false 00:28:09.081 }, 00:28:09.081 "method": "bdev_nvme_attach_controller" 00:28:09.081 }' 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:09.081 17:10:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:09.081 17:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:09.081 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:09.081 ... 00:28:09.081 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:09.081 ... 00:28:09.081 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:09.081 ... 
00:28:09.081 fio-3.35 00:28:09.081 Starting 24 threads 00:28:09.081 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.285 00:28:21.285 filename0: (groupid=0, jobs=1): err= 0: pid=259047: Mon Jul 15 17:10:26 2024 00:28:21.285 read: IOPS=584, BW=2339KiB/s (2395kB/s)(22.9MiB/10009msec) 00:28:21.285 slat (usec): min=6, max=127, avg=32.17, stdev=21.72 00:28:21.285 clat (usec): min=2293, max=37430, avg=27116.01, stdev=3802.61 00:28:21.285 lat (usec): min=2302, max=37453, avg=27148.17, stdev=3804.61 00:28:21.285 clat percentiles (usec): 00:28:21.285 | 1.00th=[ 2638], 5.00th=[26608], 10.00th=[27132], 20.00th=[27395], 00:28:21.285 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:28:21.285 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.285 | 99.00th=[30540], 99.50th=[31065], 99.90th=[32113], 99.95th=[32113], 00:28:21.285 | 99.99th=[37487] 00:28:21.285 bw ( KiB/s): min= 2176, max= 3200, per=4.24%, avg=2334.80, stdev=207.39, samples=20 00:28:21.285 iops : min= 544, max= 800, avg=583.70, stdev=51.85, samples=20 00:28:21.285 lat (msec) : 4=1.64%, 10=0.77%, 20=0.31%, 50=97.28% 00:28:21.285 cpu : usr=98.68%, sys=0.88%, ctx=27, majf=0, minf=38 00:28:21.285 IO depths : 1=5.5%, 2=11.3%, 4=24.0%, 8=52.2%, 16=7.1%, 32=0.0%, >=64=0.0% 00:28:21.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.285 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.285 issued rwts: total=5853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.285 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.285 filename0: (groupid=0, jobs=1): err= 0: pid=259048: Mon Jul 15 17:10:26 2024 00:28:21.285 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:28:21.285 slat (usec): min=6, max=117, avg=34.61, stdev=20.76 00:28:21.285 clat (usec): min=17600, max=36175, avg=27701.97, stdev=836.54 00:28:21.285 lat (usec): min=17619, max=36211, avg=27736.58, stdev=832.57 
00:28:21.285 clat percentiles (usec): 00:28:21.285 | 1.00th=[26608], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.285 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.285 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.285 | 99.00th=[29492], 99.50th=[30540], 99.90th=[35914], 99.95th=[35914], 00:28:21.285 | 99.99th=[36439] 00:28:21.285 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:28:21.285 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:28:21.285 lat (msec) : 20=0.28%, 50=99.72% 00:28:21.285 cpu : usr=98.27%, sys=1.32%, ctx=27, majf=0, minf=34 00:28:21.285 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:21.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.285 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.285 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.285 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.285 filename0: (groupid=0, jobs=1): err= 0: pid=259049: Mon Jul 15 17:10:26 2024 00:28:21.285 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10003msec) 00:28:21.285 slat (nsec): min=6815, max=52977, avg=14387.17, stdev=4829.69 00:28:21.285 clat (usec): min=16472, max=64364, avg=27892.94, stdev=2094.29 00:28:21.286 lat (usec): min=16487, max=64384, avg=27907.33, stdev=2094.05 00:28:21.286 clat percentiles (usec): 00:28:21.286 | 1.00th=[25560], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:21.286 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:28:21.286 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:21.286 | 99.00th=[29754], 99.50th=[30540], 99.90th=[64226], 99.95th=[64226], 00:28:21.286 | 99.99th=[64226] 00:28:21.286 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2277.05, stdev=68.52, samples=19 00:28:21.286 iops : min= 512, 
max= 576, avg=569.26, stdev=17.13, samples=19 00:28:21.286 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:28:21.286 cpu : usr=98.78%, sys=0.82%, ctx=15, majf=0, minf=27 00:28:21.286 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.286 filename0: (groupid=0, jobs=1): err= 0: pid=259050: Mon Jul 15 17:10:26 2024 00:28:21.286 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:28:21.286 slat (usec): min=7, max=107, avg=35.82, stdev=19.97 00:28:21.286 clat (usec): min=17388, max=36316, avg=27695.70, stdev=838.13 00:28:21.286 lat (usec): min=17421, max=36344, avg=27731.52, stdev=834.51 00:28:21.286 clat percentiles (usec): 00:28:21.286 | 1.00th=[26608], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:28:21.286 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.286 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.286 | 99.00th=[29230], 99.50th=[30540], 99.90th=[35914], 99.95th=[36439], 00:28:21.286 | 99.99th=[36439] 00:28:21.286 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:28:21.286 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:28:21.286 lat (msec) : 20=0.28%, 50=99.72% 00:28:21.286 cpu : usr=98.30%, sys=1.30%, ctx=20, majf=0, minf=37 00:28:21.286 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.286 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:28:21.286 filename0: (groupid=0, jobs=1): err= 0: pid=259051: Mon Jul 15 17:10:26 2024 00:28:21.286 read: IOPS=567, BW=2268KiB/s (2323kB/s)(22.2MiB/10003msec) 00:28:21.286 slat (usec): min=6, max=122, avg=34.80, stdev=22.29 00:28:21.286 clat (usec): min=4447, max=81453, avg=27933.28, stdev=4023.52 00:28:21.286 lat (usec): min=4454, max=81495, avg=27968.07, stdev=4024.05 00:28:21.286 clat percentiles (usec): 00:28:21.286 | 1.00th=[15270], 5.00th=[26608], 10.00th=[27132], 20.00th=[27395], 00:28:21.286 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.286 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[30278], 00:28:21.286 | 99.00th=[40633], 99.50th=[45876], 99.90th=[68682], 99.95th=[81265], 00:28:21.286 | 99.99th=[81265] 00:28:21.286 bw ( KiB/s): min= 1968, max= 2368, per=4.10%, avg=2261.05, stdev=88.13, samples=19 00:28:21.286 iops : min= 492, max= 592, avg=565.26, stdev=22.03, samples=19 00:28:21.286 lat (msec) : 10=0.25%, 20=1.90%, 50=97.57%, 100=0.28% 00:28:21.286 cpu : usr=98.92%, sys=0.67%, ctx=19, majf=0, minf=30 00:28:21.286 IO depths : 1=3.9%, 2=8.9%, 4=21.6%, 8=56.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:28:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 issued rwts: total=5672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.286 filename0: (groupid=0, jobs=1): err= 0: pid=259052: Mon Jul 15 17:10:26 2024 00:28:21.286 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10003msec) 00:28:21.286 slat (usec): min=6, max=111, avg=35.53, stdev=20.37 00:28:21.286 clat (usec): min=8531, max=54214, avg=27607.07, stdev=2775.69 00:28:21.286 lat (usec): min=8558, max=54235, avg=27642.60, stdev=2776.40 00:28:21.286 clat percentiles (usec): 00:28:21.286 | 1.00th=[17171], 5.00th=[26608], 10.00th=[27132], 
20.00th=[27132], 00:28:21.286 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.286 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28705], 00:28:21.286 | 99.00th=[34866], 99.50th=[47973], 99.90th=[54264], 99.95th=[54264], 00:28:21.286 | 99.99th=[54264] 00:28:21.286 bw ( KiB/s): min= 2048, max= 2328, per=4.12%, avg=2272.00, stdev=69.49, samples=19 00:28:21.286 iops : min= 512, max= 582, avg=568.00, stdev=17.37, samples=19 00:28:21.286 lat (msec) : 10=0.10%, 20=1.34%, 50=98.27%, 100=0.28% 00:28:21.286 cpu : usr=98.98%, sys=0.62%, ctx=15, majf=0, minf=37 00:28:21.286 IO depths : 1=5.1%, 2=10.5%, 4=22.1%, 8=54.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:28:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 complete : 0=0.0%, 4=93.4%, 8=1.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 issued rwts: total=5730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.286 filename0: (groupid=0, jobs=1): err= 0: pid=259053: Mon Jul 15 17:10:26 2024 00:28:21.286 read: IOPS=573, BW=2295KiB/s (2350kB/s)(22.4MiB/10007msec) 00:28:21.286 slat (usec): min=5, max=122, avg=39.04, stdev=21.08 00:28:21.286 clat (usec): min=12098, max=66673, avg=27571.75, stdev=3259.26 00:28:21.286 lat (usec): min=12157, max=66688, avg=27610.79, stdev=3261.06 00:28:21.286 clat percentiles (usec): 00:28:21.286 | 1.00th=[15533], 5.00th=[26346], 10.00th=[27132], 20.00th=[27132], 00:28:21.286 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.286 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28705], 00:28:21.286 | 99.00th=[40109], 99.50th=[45351], 99.90th=[57934], 99.95th=[66323], 00:28:21.286 | 99.99th=[66847] 00:28:21.286 bw ( KiB/s): min= 2048, max= 2544, per=4.16%, avg=2290.40, stdev=88.60, samples=20 00:28:21.286 iops : min= 512, max= 636, avg=572.60, stdev=22.15, samples=20 00:28:21.286 lat (msec) : 20=2.77%, 
50=96.95%, 100=0.28% 00:28:21.286 cpu : usr=98.64%, sys=0.95%, ctx=15, majf=0, minf=34 00:28:21.286 IO depths : 1=3.6%, 2=8.8%, 4=22.3%, 8=55.9%, 16=9.4%, 32=0.0%, >=64=0.0% 00:28:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 issued rwts: total=5742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.286 filename0: (groupid=0, jobs=1): err= 0: pid=259054: Mon Jul 15 17:10:26 2024 00:28:21.286 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:28:21.286 slat (nsec): min=7024, max=98322, avg=42050.12, stdev=16232.80 00:28:21.286 clat (usec): min=13406, max=43096, avg=27587.36, stdev=1075.68 00:28:21.286 lat (usec): min=13414, max=43108, avg=27629.41, stdev=1075.81 00:28:21.286 clat percentiles (usec): 00:28:21.286 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:28:21.286 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.286 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.286 | 99.00th=[29754], 99.50th=[33162], 99.90th=[37487], 99.95th=[40633], 00:28:21.286 | 99.99th=[43254] 00:28:21.286 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:28:21.286 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:28:21.286 lat (msec) : 20=0.35%, 50=99.65% 00:28:21.286 cpu : usr=97.57%, sys=1.41%, ctx=61, majf=0, minf=33 00:28:21.286 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.286 filename1: 
(groupid=0, jobs=1): err= 0: pid=259055: Mon Jul 15 17:10:26 2024 00:28:21.286 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10010msec) 00:28:21.286 slat (usec): min=7, max=129, avg=37.59, stdev=21.07 00:28:21.286 clat (usec): min=17448, max=36148, avg=27681.80, stdev=869.89 00:28:21.286 lat (usec): min=17483, max=36193, avg=27719.39, stdev=866.24 00:28:21.286 clat percentiles (usec): 00:28:21.286 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.286 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.286 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.286 | 99.00th=[30016], 99.50th=[30802], 99.90th=[35914], 99.95th=[35914], 00:28:21.286 | 99.99th=[35914] 00:28:21.286 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:28:21.286 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:28:21.286 lat (msec) : 20=0.28%, 50=99.72% 00:28:21.286 cpu : usr=98.61%, sys=0.99%, ctx=18, majf=0, minf=32 00:28:21.286 IO depths : 1=6.0%, 2=12.0%, 4=24.5%, 8=51.0%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.286 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.286 filename1: (groupid=0, jobs=1): err= 0: pid=259056: Mon Jul 15 17:10:26 2024 00:28:21.286 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:28:21.286 slat (usec): min=7, max=119, avg=47.28, stdev=18.24 00:28:21.286 clat (usec): min=17375, max=36608, avg=27563.34, stdev=862.44 00:28:21.286 lat (usec): min=17383, max=36634, avg=27610.61, stdev=860.87 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:28:21.287 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 
60.00th=[27657], 00:28:21.287 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.287 | 99.00th=[29230], 99.50th=[30016], 99.90th=[36439], 99.95th=[36439], 00:28:21.287 | 99.99th=[36439] 00:28:21.287 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:28:21.287 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:28:21.287 lat (msec) : 20=0.28%, 50=99.72% 00:28:21.287 cpu : usr=98.81%, sys=0.78%, ctx=14, majf=0, minf=36 00:28:21.287 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename1: (groupid=0, jobs=1): err= 0: pid=259057: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:28:21.287 slat (usec): min=7, max=110, avg=47.69, stdev=18.65 00:28:21.287 clat (usec): min=17393, max=36477, avg=27534.52, stdev=857.65 00:28:21.287 lat (usec): min=17411, max=36500, avg=27582.21, stdev=857.47 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[26346], 5.00th=[26870], 10.00th=[26870], 20.00th=[27132], 00:28:21.287 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:28:21.287 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.287 | 99.00th=[29230], 99.50th=[30016], 99.90th=[36439], 99.95th=[36439], 00:28:21.287 | 99.99th=[36439] 00:28:21.287 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:28:21.287 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:28:21.287 lat (msec) : 20=0.28%, 50=99.72% 00:28:21.287 cpu : usr=98.96%, sys=0.63%, ctx=11, majf=0, minf=35 00:28:21.287 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename1: (groupid=0, jobs=1): err= 0: pid=259058: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=575, BW=2301KiB/s (2356kB/s)(22.5MiB/10011msec) 00:28:21.287 slat (usec): min=6, max=107, avg=38.35, stdev=20.83 00:28:21.287 clat (usec): min=13231, max=45792, avg=27474.31, stdev=1817.94 00:28:21.287 lat (usec): min=13238, max=45813, avg=27512.66, stdev=1819.64 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[18744], 5.00th=[26608], 10.00th=[26870], 20.00th=[27132], 00:28:21.287 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.287 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.287 | 99.00th=[32375], 99.50th=[35914], 99.90th=[45876], 99.95th=[45876], 00:28:21.287 | 99.99th=[45876] 00:28:21.287 bw ( KiB/s): min= 2176, max= 2352, per=4.17%, avg=2296.80, stdev=43.89, samples=20 00:28:21.287 iops : min= 544, max= 588, avg=574.20, stdev=10.97, samples=20 00:28:21.287 lat (msec) : 20=1.29%, 50=98.71% 00:28:21.287 cpu : usr=98.88%, sys=0.73%, ctx=9, majf=0, minf=38 00:28:21.287 IO depths : 1=5.7%, 2=11.5%, 4=23.8%, 8=52.1%, 16=6.9%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename1: (groupid=0, jobs=1): err= 0: pid=259059: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=592, BW=2371KiB/s (2428kB/s)(23.2MiB/10015msec) 
00:28:21.287 slat (usec): min=4, max=160, avg=35.86, stdev=19.90 00:28:21.287 clat (usec): min=2284, max=46703, avg=26706.79, stdev=4614.32 00:28:21.287 lat (usec): min=2290, max=46763, avg=26742.65, stdev=4619.93 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[ 2704], 5.00th=[17957], 10.00th=[26608], 20.00th=[27132], 00:28:21.287 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.287 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28705], 00:28:21.287 | 99.00th=[36963], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:28:21.287 | 99.99th=[46924] 00:28:21.287 bw ( KiB/s): min= 2176, max= 3248, per=4.30%, avg=2368.00, stdev=250.14, samples=20 00:28:21.287 iops : min= 544, max= 812, avg=592.00, stdev=62.54, samples=20 00:28:21.287 lat (msec) : 4=1.62%, 10=0.81%, 20=3.91%, 50=93.67% 00:28:21.287 cpu : usr=97.49%, sys=1.43%, ctx=108, majf=0, minf=38 00:28:21.287 IO depths : 1=5.1%, 2=10.5%, 4=22.5%, 8=54.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename1: (groupid=0, jobs=1): err= 0: pid=259060: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:28:21.287 slat (usec): min=11, max=112, avg=44.21, stdev=18.88 00:28:21.287 clat (usec): min=17400, max=36258, avg=27617.80, stdev=845.58 00:28:21.287 lat (usec): min=17431, max=36302, avg=27662.00, stdev=842.39 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:28:21.287 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.287 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.287 | 
99.00th=[29230], 99.50th=[30278], 99.90th=[35914], 99.95th=[36439], 00:28:21.287 | 99.99th=[36439] 00:28:21.287 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:28:21.287 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:28:21.287 lat (msec) : 20=0.28%, 50=99.72% 00:28:21.287 cpu : usr=98.14%, sys=1.41%, ctx=81, majf=0, minf=43 00:28:21.287 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename1: (groupid=0, jobs=1): err= 0: pid=259061: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:28:21.287 slat (nsec): min=7048, max=92976, avg=34998.92, stdev=21407.28 00:28:21.287 clat (usec): min=18376, max=34562, avg=27683.90, stdev=871.07 00:28:21.287 lat (usec): min=18384, max=34585, avg=27718.90, stdev=866.89 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[25560], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.287 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.287 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28443], 95.00th=[28705], 00:28:21.287 | 99.00th=[30278], 99.50th=[31851], 99.90th=[34341], 99.95th=[34341], 00:28:21.287 | 99.99th=[34341] 00:28:21.287 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:28:21.287 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:28:21.287 lat (msec) : 20=0.28%, 50=99.72% 00:28:21.287 cpu : usr=98.87%, sys=0.73%, ctx=10, majf=0, minf=69 00:28:21.287 IO depths : 1=5.8%, 2=11.9%, 4=24.7%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename1: (groupid=0, jobs=1): err= 0: pid=259062: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=570, BW=2282KiB/s (2337kB/s)(22.3MiB/10006msec) 00:28:21.287 slat (usec): min=5, max=110, avg=40.15, stdev=19.10 00:28:21.287 clat (usec): min=12093, max=57882, avg=27706.58, stdev=2002.15 00:28:21.287 lat (usec): min=12110, max=57898, avg=27746.73, stdev=2000.14 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[25822], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.287 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.287 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.287 | 99.00th=[30540], 99.50th=[36963], 99.90th=[57934], 99.95th=[57934], 00:28:21.287 | 99.99th=[57934] 00:28:21.287 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2276.80, stdev=66.70, samples=20 00:28:21.287 iops : min= 512, max= 576, avg=569.20, stdev=16.68, samples=20 00:28:21.287 lat (msec) : 20=0.39%, 50=99.33%, 100=0.28% 00:28:21.287 cpu : usr=98.76%, sys=0.85%, ctx=12, majf=0, minf=31 00:28:21.287 IO depths : 1=5.8%, 2=11.8%, 4=24.3%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename2: (groupid=0, jobs=1): err= 0: pid=259063: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=589, BW=2359KiB/s (2416kB/s)(23.1MiB/10007msec) 00:28:21.287 slat (nsec): min=5045, max=78372, avg=14067.58, stdev=8901.34 00:28:21.287 clat 
(usec): min=7223, max=67442, avg=27054.94, stdev=5050.23 00:28:21.287 lat (usec): min=7249, max=67455, avg=27069.01, stdev=5050.55 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[14353], 5.00th=[16319], 10.00th=[20841], 20.00th=[26084], 00:28:21.287 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:28:21.287 | 70.00th=[27919], 80.00th=[28181], 90.00th=[30540], 95.00th=[34866], 00:28:21.287 | 99.00th=[41157], 99.50th=[45876], 99.90th=[57934], 99.95th=[58459], 00:28:21.287 | 99.99th=[67634] 00:28:21.287 bw ( KiB/s): min= 2144, max= 2592, per=4.27%, avg=2354.40, stdev=108.43, samples=20 00:28:21.287 iops : min= 536, max= 648, avg=588.60, stdev=27.11, samples=20 00:28:21.287 lat (msec) : 10=0.14%, 20=7.91%, 50=91.68%, 100=0.27% 00:28:21.287 cpu : usr=98.85%, sys=0.76%, ctx=6, majf=0, minf=33 00:28:21.287 IO depths : 1=0.9%, 2=2.2%, 4=7.3%, 8=75.3%, 16=14.4%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=89.9%, 8=7.2%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename2: (groupid=0, jobs=1): err= 0: pid=259064: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=571, BW=2285KiB/s (2340kB/s)(22.3MiB/10003msec) 00:28:21.287 slat (usec): min=6, max=130, avg=37.03, stdev=21.14 00:28:21.287 clat (usec): min=7504, max=63958, avg=27675.27, stdev=2593.14 00:28:21.287 lat (usec): min=7510, max=63975, avg=27712.30, stdev=2593.10 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[16319], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:28:21.287 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.287 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28705], 00:28:21.287 | 99.00th=[36963], 99.50th=[46400], 99.90th=[54264], 99.95th=[54264], 00:28:21.287 | 
99.99th=[63701] 00:28:21.287 bw ( KiB/s): min= 2096, max= 2336, per=4.13%, avg=2277.89, stdev=61.09, samples=19 00:28:21.287 iops : min= 524, max= 584, avg=569.47, stdev=15.27, samples=19 00:28:21.287 lat (msec) : 10=0.04%, 20=1.19%, 50=98.49%, 100=0.28% 00:28:21.287 cpu : usr=98.91%, sys=0.69%, ctx=16, majf=0, minf=37 00:28:21.287 IO depths : 1=4.1%, 2=9.9%, 4=24.1%, 8=53.4%, 16=8.5%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename2: (groupid=0, jobs=1): err= 0: pid=259065: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=573, BW=2294KiB/s (2349kB/s)(22.4MiB/10014msec) 00:28:21.287 slat (nsec): min=6720, max=90174, avg=19757.75, stdev=16021.06 00:28:21.287 clat (usec): min=11377, max=41543, avg=27739.58, stdev=1995.23 00:28:21.287 lat (usec): min=11387, max=41581, avg=27759.34, stdev=1994.60 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[16057], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:21.287 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:28:21.287 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:28:21.287 | 99.00th=[36439], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:28:21.287 | 99.99th=[41681] 00:28:21.287 bw ( KiB/s): min= 2176, max= 2432, per=4.16%, avg=2291.20, stdev=57.48, samples=20 00:28:21.287 iops : min= 544, max= 608, avg=572.80, stdev=14.37, samples=20 00:28:21.287 lat (msec) : 20=1.36%, 50=98.64% 00:28:21.287 cpu : usr=98.32%, sys=1.06%, ctx=63, majf=0, minf=30 00:28:21.287 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 
0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename2: (groupid=0, jobs=1): err= 0: pid=259066: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=571, BW=2288KiB/s (2343kB/s)(22.4MiB/10014msec) 00:28:21.287 slat (usec): min=7, max=101, avg=41.37, stdev=16.50 00:28:21.287 clat (usec): min=13306, max=49694, avg=27618.43, stdev=1259.23 00:28:21.287 lat (usec): min=13315, max=49743, avg=27659.81, stdev=1258.88 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.287 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.287 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.287 | 99.00th=[29230], 99.50th=[35914], 99.90th=[39060], 99.95th=[47449], 00:28:21.287 | 99.99th=[49546] 00:28:21.287 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:28:21.287 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:28:21.287 lat (msec) : 20=0.52%, 50=99.48% 00:28:21.287 cpu : usr=97.83%, sys=1.18%, ctx=103, majf=0, minf=31 00:28:21.287 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename2: (groupid=0, jobs=1): err= 0: pid=259067: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:28:21.287 slat (usec): min=7, max=111, avg=46.58, stdev=18.32 00:28:21.287 clat (usec): min=17376, max=37060, avg=27571.20, stdev=928.52 00:28:21.287 lat (usec): 
min=17395, max=37083, avg=27617.79, stdev=927.10 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:28:21.287 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.287 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.287 | 99.00th=[29492], 99.50th=[30278], 99.90th=[36439], 99.95th=[36963], 00:28:21.287 | 99.99th=[36963] 00:28:21.287 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:28:21.287 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:28:21.287 lat (msec) : 20=0.31%, 50=99.69% 00:28:21.287 cpu : usr=98.76%, sys=0.84%, ctx=17, majf=0, minf=30 00:28:21.287 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename2: (groupid=0, jobs=1): err= 0: pid=259068: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=576, BW=2307KiB/s (2363kB/s)(22.5MiB/10006msec) 00:28:21.287 slat (nsec): min=5932, max=91396, avg=29072.98, stdev=18646.89 00:28:21.287 clat (usec): min=7199, max=57553, avg=27497.97, stdev=3437.75 00:28:21.287 lat (usec): min=7210, max=57569, avg=27527.04, stdev=3437.88 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[14877], 5.00th=[22152], 10.00th=[26608], 20.00th=[27132], 00:28:21.287 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.287 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28443], 95.00th=[29492], 00:28:21.287 | 99.00th=[39584], 99.50th=[42206], 99.90th=[57410], 99.95th=[57410], 00:28:21.287 | 99.99th=[57410] 00:28:21.287 bw ( KiB/s): min= 2048, max= 2480, per=4.18%, avg=2302.40, 
stdev=91.97, samples=20 00:28:21.287 iops : min= 512, max= 620, avg=575.60, stdev=22.99, samples=20 00:28:21.287 lat (msec) : 10=0.10%, 20=2.79%, 50=96.73%, 100=0.38% 00:28:21.287 cpu : usr=98.79%, sys=0.78%, ctx=15, majf=0, minf=30 00:28:21.287 IO depths : 1=4.4%, 2=9.0%, 4=19.6%, 8=58.1%, 16=8.9%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=92.8%, 8=2.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename2: (groupid=0, jobs=1): err= 0: pid=259070: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=571, BW=2285KiB/s (2340kB/s)(22.3MiB/10009msec) 00:28:21.287 slat (usec): min=6, max=113, avg=37.04, stdev=20.88 00:28:21.287 clat (usec): min=11626, max=64409, avg=27708.70, stdev=2363.15 00:28:21.287 lat (usec): min=11641, max=64440, avg=27745.74, stdev=2361.79 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[23725], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.287 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.287 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.287 | 99.00th=[30278], 99.50th=[41157], 99.90th=[61604], 99.95th=[61604], 00:28:21.287 | 99.99th=[64226] 00:28:21.287 bw ( KiB/s): min= 2096, max= 2304, per=4.14%, avg=2280.80, stdev=58.61, samples=20 00:28:21.287 iops : min= 524, max= 576, avg=570.20, stdev=14.65, samples=20 00:28:21.287 lat (msec) : 20=0.70%, 50=99.02%, 100=0.28% 00:28:21.287 cpu : usr=98.97%, sys=0.64%, ctx=15, majf=0, minf=38 00:28:21.287 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: 
total=5718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 filename2: (groupid=0, jobs=1): err= 0: pid=259071: Mon Jul 15 17:10:26 2024 00:28:21.287 read: IOPS=566, BW=2267KiB/s (2322kB/s)(22.1MiB/10002msec) 00:28:21.287 slat (usec): min=6, max=109, avg=42.48, stdev=19.62 00:28:21.287 clat (usec): min=10730, max=81516, avg=27850.91, stdev=3426.89 00:28:21.287 lat (usec): min=10743, max=81534, avg=27893.40, stdev=3425.31 00:28:21.287 clat percentiles (usec): 00:28:21.287 | 1.00th=[20055], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:28:21.287 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.287 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28705], 00:28:21.287 | 99.00th=[41681], 99.50th=[45876], 99.90th=[81265], 99.95th=[81265], 00:28:21.287 | 99.99th=[81265] 00:28:21.287 bw ( KiB/s): min= 1920, max= 2304, per=4.10%, avg=2258.95, stdev=97.30, samples=19 00:28:21.287 iops : min= 480, max= 576, avg=564.74, stdev=24.32, samples=19 00:28:21.287 lat (msec) : 20=1.02%, 50=98.69%, 100=0.28% 00:28:21.287 cpu : usr=98.71%, sys=0.88%, ctx=17, majf=0, minf=47 00:28:21.287 IO depths : 1=5.1%, 2=10.6%, 4=22.6%, 8=53.8%, 16=7.8%, 32=0.0%, >=64=0.0% 00:28:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.287 issued rwts: total=5669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.287 00:28:21.287 Run status group 0 (all jobs): 00:28:21.287 READ: bw=53.8MiB/s (56.4MB/s), 2267KiB/s-2371KiB/s (2322kB/s-2428kB/s), io=539MiB (565MB), run=10002-10015msec 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:21.288 17:10:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 
00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 bdev_null0 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 [2024-07-15 17:10:26.575831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 bdev_null1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 17:10:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.288 { 00:28:21.288 "params": { 00:28:21.288 "name": "Nvme$subsystem", 00:28:21.288 "trtype": "$TEST_TRANSPORT", 00:28:21.288 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:28:21.288 "adrfam": "ipv4", 00:28:21.288 "trsvcid": "$NVMF_PORT", 00:28:21.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.288 "hdgst": ${hdgst:-false}, 00:28:21.288 "ddgst": ${ddgst:-false} 00:28:21.288 }, 00:28:21.288 "method": "bdev_nvme_attach_controller" 00:28:21.288 } 00:28:21.288 EOF 00:28:21.288 )") 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.288 { 00:28:21.288 "params": { 00:28:21.288 "name": "Nvme$subsystem", 00:28:21.288 "trtype": "$TEST_TRANSPORT", 00:28:21.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.288 "adrfam": "ipv4", 00:28:21.288 "trsvcid": "$NVMF_PORT", 00:28:21.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.288 "hdgst": ${hdgst:-false}, 00:28:21.288 "ddgst": ${ddgst:-false} 00:28:21.288 }, 00:28:21.288 "method": "bdev_nvme_attach_controller" 00:28:21.288 } 00:28:21.288 EOF 00:28:21.288 )") 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:21.288 "params": { 00:28:21.288 "name": "Nvme0", 00:28:21.288 "trtype": "tcp", 00:28:21.288 "traddr": "10.0.0.2", 00:28:21.288 "adrfam": "ipv4", 00:28:21.288 "trsvcid": "4420", 00:28:21.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:21.288 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:21.288 "hdgst": false, 00:28:21.288 "ddgst": false 00:28:21.288 }, 00:28:21.288 "method": "bdev_nvme_attach_controller" 00:28:21.288 },{ 00:28:21.288 "params": { 00:28:21.288 "name": "Nvme1", 00:28:21.288 "trtype": "tcp", 00:28:21.288 "traddr": "10.0.0.2", 00:28:21.288 "adrfam": "ipv4", 00:28:21.288 "trsvcid": "4420", 00:28:21.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:21.288 "hdgst": false, 00:28:21.288 "ddgst": false 00:28:21.288 }, 00:28:21.288 "method": "bdev_nvme_attach_controller" 00:28:21.288 }' 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:21.288 17:10:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:21.288 17:10:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.288 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:21.288 ... 00:28:21.288 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:21.288 ... 00:28:21.288 fio-3.35 00:28:21.288 Starting 4 threads 00:28:21.288 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.621 00:28:26.621 filename0: (groupid=0, jobs=1): err= 0: pid=260946: Mon Jul 15 17:10:32 2024 00:28:26.621 read: IOPS=2630, BW=20.6MiB/s (21.5MB/s)(103MiB/5002msec) 00:28:26.621 slat (nsec): min=6080, max=74357, avg=12696.43, stdev=8625.60 00:28:26.621 clat (usec): min=692, max=5488, avg=3002.90, stdev=502.69 00:28:26.621 lat (usec): min=705, max=5518, avg=3015.60, stdev=502.80 00:28:26.621 clat percentiles (usec): 00:28:26.621 | 1.00th=[ 1795], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2704], 00:28:26.621 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3032], 00:28:26.621 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3589], 95.00th=[ 3982], 00:28:26.621 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5276], 99.95th=[ 5342], 00:28:26.621 | 99.99th=[ 5407] 00:28:26.621 bw ( KiB/s): min=20176, max=21712, per=24.94%, avg=20974.22, stdev=480.50, samples=9 00:28:26.621 iops : min= 2522, max= 2714, avg=2621.78, stdev=60.06, samples=9 00:28:26.621 lat (usec) : 750=0.02%, 1000=0.04% 00:28:26.621 lat (msec) : 2=1.72%, 4=93.33%, 10=4.89% 00:28:26.621 cpu : usr=96.88%, sys=2.70%, ctx=33, majf=0, minf=63 00:28:26.621 IO depths : 1=0.1%, 2=4.9%, 4=66.9%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.621 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.621 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.621 issued rwts: total=13158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.621 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:26.621 filename0: (groupid=0, jobs=1): err= 0: pid=260948: Mon Jul 15 17:10:32 2024 00:28:26.621 read: IOPS=2710, BW=21.2MiB/s (22.2MB/s)(106MiB/5001msec) 00:28:26.621 slat (nsec): min=5952, max=74529, avg=11610.66, stdev=7277.77 00:28:26.621 clat (usec): min=781, max=5674, avg=2918.31, stdev=418.09 00:28:26.621 lat (usec): min=792, max=5687, avg=2929.92, stdev=418.27 00:28:26.621 clat percentiles (usec): 00:28:26.621 | 1.00th=[ 1860], 5.00th=[ 2212], 10.00th=[ 2442], 20.00th=[ 2671], 00:28:26.621 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2999], 00:28:26.621 | 70.00th=[ 3032], 80.00th=[ 3163], 90.00th=[ 3294], 95.00th=[ 3589], 00:28:26.621 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 5211], 99.95th=[ 5604], 00:28:26.621 | 99.99th=[ 5669] 00:28:26.621 bw ( KiB/s): min=20992, max=22928, per=25.82%, avg=21711.44, stdev=561.05, samples=9 00:28:26.621 iops : min= 2624, max= 2866, avg=2713.89, stdev=70.11, samples=9 00:28:26.621 lat (usec) : 1000=0.01% 00:28:26.621 lat (msec) : 2=1.84%, 4=96.22%, 10=1.94% 00:28:26.621 cpu : usr=97.86%, sys=1.80%, ctx=6, majf=0, minf=88 00:28:26.621 IO depths : 1=0.1%, 2=3.3%, 4=68.1%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.621 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.621 issued rwts: total=13554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.621 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:26.621 filename1: (groupid=0, jobs=1): err= 0: pid=260949: Mon Jul 15 17:10:32 2024 00:28:26.621 read: IOPS=2621, BW=20.5MiB/s (21.5MB/s)(102MiB/5002msec) 00:28:26.621 slat (nsec): min=5961, max=74606, 
avg=11749.64, stdev=7423.25 00:28:26.621 clat (usec): min=833, max=5389, avg=3016.75, stdev=514.61 00:28:26.621 lat (usec): min=839, max=5418, avg=3028.50, stdev=514.53 00:28:26.621 clat percentiles (usec): 00:28:26.621 | 1.00th=[ 1958], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2704], 00:28:26.621 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2999], 00:28:26.621 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3621], 95.00th=[ 4146], 00:28:26.621 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5211], 00:28:26.621 | 99.99th=[ 5407] 00:28:26.621 bw ( KiB/s): min=20352, max=22528, per=24.99%, avg=21016.89, stdev=749.96, samples=9 00:28:26.621 iops : min= 2544, max= 2816, avg=2627.11, stdev=93.74, samples=9 00:28:26.621 lat (usec) : 1000=0.02% 00:28:26.621 lat (msec) : 2=1.37%, 4=92.08%, 10=6.54% 00:28:26.621 cpu : usr=94.56%, sys=3.78%, ctx=44, majf=0, minf=146 00:28:26.621 IO depths : 1=0.2%, 2=4.9%, 4=67.2%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.621 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.621 issued rwts: total=13112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.621 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:26.621 filename1: (groupid=0, jobs=1): err= 0: pid=260950: Mon Jul 15 17:10:32 2024 00:28:26.621 read: IOPS=2549, BW=19.9MiB/s (20.9MB/s)(99.6MiB/5001msec) 00:28:26.621 slat (nsec): min=6111, max=56540, avg=13525.99, stdev=8523.52 00:28:26.621 clat (usec): min=662, max=5662, avg=3095.26, stdev=490.51 00:28:26.621 lat (usec): min=670, max=5689, avg=3108.78, stdev=490.48 00:28:26.621 clat percentiles (usec): 00:28:26.621 | 1.00th=[ 2040], 5.00th=[ 2442], 10.00th=[ 2638], 20.00th=[ 2769], 00:28:26.621 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:28:26.621 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3720], 95.00th=[ 4080], 00:28:26.621 | 99.00th=[ 4686], 
99.50th=[ 4883], 99.90th=[ 5407], 99.95th=[ 5538], 00:28:26.621 | 99.99th=[ 5669] 00:28:26.621 bw ( KiB/s): min=19584, max=20784, per=24.24%, avg=20380.44, stdev=366.06, samples=9 00:28:26.621 iops : min= 2448, max= 2598, avg=2547.56, stdev=45.76, samples=9 00:28:26.621 lat (usec) : 750=0.01%, 1000=0.04% 00:28:26.621 lat (msec) : 2=0.80%, 4=93.32%, 10=5.83% 00:28:26.621 cpu : usr=97.08%, sys=2.48%, ctx=6, majf=0, minf=71 00:28:26.621 IO depths : 1=0.1%, 2=7.2%, 4=63.8%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.621 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.621 issued rwts: total=12751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.621 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:26.621 00:28:26.621 Run status group 0 (all jobs): 00:28:26.621 READ: bw=82.1MiB/s (86.1MB/s), 19.9MiB/s-21.2MiB/s (20.9MB/s-22.2MB/s), io=411MiB (431MB), run=5001-5002msec 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.621 00:28:26.621 real 0m24.148s 00:28:26.621 user 4m51.575s 00:28:26.621 sys 0m4.352s 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:26.621 17:10:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.621 ************************************ 00:28:26.621 END TEST fio_dif_rand_params 00:28:26.621 ************************************ 00:28:26.621 17:10:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:26.621 17:10:32 
nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:26.621 17:10:32 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:26.621 17:10:32 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.621 17:10:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:26.621 ************************************ 00:28:26.621 START TEST fio_dif_digest 00:28:26.621 ************************************ 00:28:26.621 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:26.621 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:26.621 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:26.621 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:26.621 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:26.621 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:26.621 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:26.621 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:26.621 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:26.621 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 3 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.622 bdev_null0 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.622 [2024-07-15 17:10:32.899463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:26.622 17:10:32 
nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.622 { 00:28:26.622 "params": { 00:28:26.622 "name": "Nvme$subsystem", 00:28:26.622 "trtype": "$TEST_TRANSPORT", 00:28:26.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.622 "adrfam": "ipv4", 00:28:26.622 "trsvcid": "$NVMF_PORT", 00:28:26.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.622 "hdgst": ${hdgst:-false}, 00:28:26.622 "ddgst": ${ddgst:-false} 00:28:26.622 }, 00:28:26.622 "method": "bdev_nvme_attach_controller" 00:28:26.622 } 00:28:26.622 EOF 00:28:26.622 )") 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:26.622 "params": { 00:28:26.622 "name": "Nvme0", 00:28:26.622 "trtype": "tcp", 00:28:26.622 "traddr": "10.0.0.2", 00:28:26.622 "adrfam": "ipv4", 00:28:26.622 "trsvcid": "4420", 00:28:26.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:26.622 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:26.622 "hdgst": true, 00:28:26.622 "ddgst": true 00:28:26.622 }, 00:28:26.622 "method": "bdev_nvme_attach_controller" 00:28:26.622 }' 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:26.622 17:10:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.622 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:26.622 ... 
00:28:26.622 fio-3.35 00:28:26.622 Starting 3 threads 00:28:26.895 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.102 00:28:39.102 filename0: (groupid=0, jobs=1): err= 0: pid=262178: Mon Jul 15 17:10:43 2024 00:28:39.102 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(329MiB/10045msec) 00:28:39.102 slat (nsec): min=6559, max=29583, avg=11528.55, stdev=2069.45 00:28:39.102 clat (usec): min=6465, max=53659, avg=11427.90, stdev=2823.09 00:28:39.102 lat (usec): min=6479, max=53671, avg=11439.43, stdev=2823.08 00:28:39.102 clat percentiles (usec): 00:28:39.102 | 1.00th=[ 7635], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10683], 00:28:39.102 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:28:39.102 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:28:39.102 | 99.00th=[13435], 99.50th=[14091], 99.90th=[53216], 99.95th=[53740], 00:28:39.102 | 99.99th=[53740] 00:28:39.102 bw ( KiB/s): min=30464, max=37632, per=32.30%, avg=33638.40, stdev=1527.89, samples=20 00:28:39.102 iops : min= 238, max= 294, avg=262.80, stdev=11.94, samples=20 00:28:39.102 lat (msec) : 10=6.96%, 20=92.62%, 50=0.04%, 100=0.38% 00:28:39.102 cpu : usr=94.56%, sys=5.12%, ctx=22, majf=0, minf=71 00:28:39.102 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:39.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.102 issued rwts: total=2630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.102 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:39.102 filename0: (groupid=0, jobs=1): err= 0: pid=262179: Mon Jul 15 17:10:43 2024 00:28:39.102 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(370MiB/10044msec) 00:28:39.102 slat (nsec): min=6578, max=49772, avg=11566.45, stdev=2224.88 00:28:39.102 clat (usec): min=6568, max=52266, avg=10158.66, stdev=1836.98 00:28:39.102 lat (usec): min=6579, max=52291, avg=10170.22, 
stdev=1837.07 00:28:39.102 clat percentiles (usec): 00:28:39.102 | 1.00th=[ 7504], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9503], 00:28:39.102 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:28:39.102 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:28:39.102 | 99.00th=[12125], 99.50th=[12387], 99.90th=[52167], 99.95th=[52167], 00:28:39.102 | 99.99th=[52167] 00:28:39.102 bw ( KiB/s): min=34629, max=39424, per=36.34%, avg=37840.25, stdev=1073.93, samples=20 00:28:39.102 iops : min= 270, max= 308, avg=295.60, stdev= 8.48, samples=20 00:28:39.102 lat (msec) : 10=42.29%, 20=57.54%, 50=0.07%, 100=0.10% 00:28:39.102 cpu : usr=94.76%, sys=4.93%, ctx=29, majf=0, minf=143 00:28:39.102 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:39.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.103 issued rwts: total=2958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.103 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:39.103 filename0: (groupid=0, jobs=1): err= 0: pid=262180: Mon Jul 15 17:10:43 2024 00:28:39.103 read: IOPS=257, BW=32.2MiB/s (33.7MB/s)(323MiB/10044msec) 00:28:39.103 slat (nsec): min=6532, max=37504, avg=11462.30, stdev=2228.18 00:28:39.103 clat (usec): min=5670, max=53715, avg=11630.73, stdev=2793.67 00:28:39.103 lat (usec): min=5678, max=53725, avg=11642.19, stdev=2793.65 00:28:39.103 clat percentiles (usec): 00:28:39.103 | 1.00th=[ 7570], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:28:39.103 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:28:39.103 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:28:39.103 | 99.00th=[13960], 99.50th=[14484], 99.90th=[53216], 99.95th=[53740], 00:28:39.103 | 99.99th=[53740] 00:28:39.103 bw ( KiB/s): min=26368, max=35072, per=31.74%, avg=33049.60, stdev=1785.06, 
samples=20 00:28:39.103 iops : min= 206, max= 274, avg=258.20, stdev=13.95, samples=20 00:28:39.103 lat (msec) : 10=4.76%, 20=94.81%, 50=0.08%, 100=0.35% 00:28:39.103 cpu : usr=94.88%, sys=4.81%, ctx=19, majf=0, minf=129 00:28:39.103 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:39.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.103 issued rwts: total=2584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.103 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:39.103 00:28:39.103 Run status group 0 (all jobs): 00:28:39.103 READ: bw=102MiB/s (107MB/s), 32.2MiB/s-36.8MiB/s (33.7MB/s-38.6MB/s), io=1022MiB (1071MB), run=10044-10045msec 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.103 00:28:39.103 real 0m11.103s 00:28:39.103 user 0m34.982s 00:28:39.103 sys 0m1.752s 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:39.103 17:10:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.103 ************************************ 00:28:39.103 END TEST fio_dif_digest 00:28:39.103 ************************************ 00:28:39.103 17:10:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:39.103 17:10:43 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:39.103 17:10:43 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:39.103 17:10:43 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:39.103 17:10:43 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:39.103 17:10:44 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:39.103 17:10:44 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:39.103 17:10:44 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:39.103 17:10:44 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:39.103 rmmod nvme_tcp 00:28:39.103 rmmod nvme_fabrics 00:28:39.103 rmmod nvme_keyring 00:28:39.103 17:10:44 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:39.103 17:10:44 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:39.103 17:10:44 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:39.103 17:10:44 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 253566 ']' 00:28:39.103 17:10:44 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 253566 00:28:39.103 17:10:44 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 253566 ']' 00:28:39.103 17:10:44 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 253566 00:28:39.103 17:10:44 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:28:39.103 17:10:44 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:39.103 17:10:44 nvmf_dif -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 253566 00:28:39.103 17:10:44 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:39.103 17:10:44 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:39.103 17:10:44 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 253566' 00:28:39.103 killing process with pid 253566 00:28:39.103 17:10:44 nvmf_dif -- common/autotest_common.sh@967 -- # kill 253566 00:28:39.103 17:10:44 nvmf_dif -- common/autotest_common.sh@972 -- # wait 253566 00:28:39.103 17:10:44 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:39.103 17:10:44 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:40.042 Waiting for block devices as requested 00:28:40.042 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:40.042 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:40.042 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:40.042 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:40.302 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:40.302 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:40.302 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:40.302 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:40.561 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:40.561 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:40.561 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:40.821 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:40.821 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:40.821 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:40.821 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:41.079 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:41.079 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:41.079 17:10:47 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:41.079 17:10:47 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:41.079 17:10:47 
nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:41.079 17:10:47 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:41.079 17:10:47 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.079 17:10:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:41.079 17:10:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.616 17:10:49 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:43.616 00:28:43.616 real 1m12.348s 00:28:43.616 user 7m8.239s 00:28:43.616 sys 0m17.996s 00:28:43.616 17:10:49 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:43.616 17:10:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:43.616 ************************************ 00:28:43.616 END TEST nvmf_dif 00:28:43.616 ************************************ 00:28:43.616 17:10:49 -- common/autotest_common.sh@1142 -- # return 0 00:28:43.616 17:10:49 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:43.616 17:10:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:43.616 17:10:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.616 17:10:49 -- common/autotest_common.sh@10 -- # set +x 00:28:43.616 ************************************ 00:28:43.616 START TEST nvmf_abort_qd_sizes 00:28:43.616 ************************************ 00:28:43.616 17:10:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:43.616 * Looking for test storage... 
00:28:43.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:43.616 17:10:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.616 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:43.616 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.616 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:43.617 17:10:49 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:43.617 17:10:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:48.892 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:48.892 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:48.892 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:28:48.893 Found net devices under 0000:86:00.0: cvl_0_0 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:48.893 Found net devices under 0000:86:00.1: cvl_0_1 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:48.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:48.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:28:48.893 00:28:48.893 --- 10.0.0.2 ping statistics --- 00:28:48.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.893 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:28:48.893 00:28:48.893 --- 10.0.0.1 ping statistics --- 00:28:48.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.893 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:48.893 17:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:51.429 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:51.429 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:28:51.429 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:51.996 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=269878 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 269878 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 269878 ']' 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:52.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:52.256 17:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:52.256 [2024-07-15 17:10:58.850130] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:28:52.256 [2024-07-15 17:10:58.850174] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.256 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.256 [2024-07-15 17:10:58.906978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.514 [2024-07-15 17:10:58.989382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.514 [2024-07-15 17:10:58.989418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.514 [2024-07-15 17:10:58.989426] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.514 [2024-07-15 17:10:58.989432] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.514 [2024-07-15 17:10:58.989437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:52.514 [2024-07-15 17:10:58.989483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.514 [2024-07-15 17:10:58.989599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.514 [2024-07-15 17:10:58.989751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.514 [2024-07-15 17:10:58.989753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.080 17:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:53.080 ************************************ 00:28:53.080 START TEST spdk_target_abort 00:28:53.080 ************************************ 00:28:53.080 17:10:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:28:53.080 17:10:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:53.080 17:10:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:28:53.080 17:10:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.080 17:10:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.363 spdk_targetn1 00:28:56.363 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.363 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:56.363 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.363 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.363 [2024-07-15 17:11:02.580135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.363 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.363 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:56.363 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.363 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.363 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.364 [2024-07-15 17:11:02.613065] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:56.364 17:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:56.364 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.649 Initializing NVMe Controllers 00:28:59.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:59.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:59.649 Initialization complete. Launching workers. 
00:28:59.649 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14886, failed: 0 00:28:59.649 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1368, failed to submit 13518 00:28:59.649 success 751, unsuccess 617, failed 0 00:28:59.649 17:11:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:59.649 17:11:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:59.649 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.995 Initializing NVMe Controllers 00:29:02.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:02.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:02.995 Initialization complete. Launching workers. 
00:29:02.995 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8831, failed: 0 00:29:02.995 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1252, failed to submit 7579 00:29:02.995 success 296, unsuccess 956, failed 0 00:29:02.995 17:11:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:02.995 17:11:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:02.995 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.285 Initializing NVMe Controllers 00:29:06.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:06.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:06.285 Initialization complete. Launching workers. 
00:29:06.285 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38017, failed: 0 00:29:06.285 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2780, failed to submit 35237 00:29:06.285 success 586, unsuccess 2194, failed 0 00:29:06.285 17:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:06.285 17:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.285 17:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:06.285 17:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.285 17:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:06.285 17:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.285 17:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:06.850 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.851 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 269878 00:29:06.851 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 269878 ']' 00:29:06.851 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 269878 00:29:06.851 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:07.109 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:07.109 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 269878 00:29:07.109 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:07.109 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:07.109 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 269878' 00:29:07.109 killing process with pid 269878 00:29:07.109 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 269878 00:29:07.109 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 269878 00:29:07.109 00:29:07.109 real 0m14.002s 00:29:07.109 user 0m55.831s 00:29:07.109 sys 0m2.275s 00:29:07.109 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:07.109 17:11:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.109 ************************************ 00:29:07.109 END TEST spdk_target_abort 00:29:07.109 ************************************ 00:29:07.369 17:11:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:07.369 17:11:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:07.369 17:11:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:07.369 17:11:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.369 17:11:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:07.369 ************************************ 00:29:07.369 START TEST kernel_target_abort 00:29:07.369 ************************************ 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 
00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:07.369 17:11:13 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:07.369 17:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:09.906 Waiting for block devices as requested 00:29:09.906 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:09.906 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:09.906 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:09.906 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:09.906 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:09.906 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:09.906 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:09.906 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:10.165 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:10.165 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:10.165 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:10.165 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:10.424 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:10.424 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:10.424 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:10.683 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:10.683 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:10.683 No valid GPT data, bailing 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:10.683 17:11:17 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:10.683 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:10.941 00:29:10.941 Discovery Log Number of Records 2, Generation counter 2 00:29:10.941 =====Discovery Log Entry 0====== 00:29:10.941 trtype: tcp 00:29:10.941 adrfam: ipv4 00:29:10.941 subtype: current discovery subsystem 00:29:10.941 treq: not specified, sq flow control disable supported 00:29:10.941 portid: 1 00:29:10.941 trsvcid: 4420 00:29:10.941 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:10.941 traddr: 10.0.0.1 00:29:10.941 eflags: none 00:29:10.941 sectype: none 00:29:10.941 =====Discovery Log Entry 1====== 00:29:10.941 trtype: tcp 00:29:10.941 adrfam: ipv4 00:29:10.941 subtype: nvme subsystem 00:29:10.941 treq: not specified, sq flow control disable supported 00:29:10.941 portid: 1 00:29:10.941 trsvcid: 4420 00:29:10.941 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:10.941 traddr: 10.0.0.1 00:29:10.941 eflags: none 00:29:10.941 
sectype: none 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:10.941 17:11:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:10.941 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.218 Initializing NVMe Controllers 00:29:14.218 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:14.218 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:14.218 Initialization complete. Launching workers. 
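The configure_kernel_target steps traced earlier (mkdir and echo into /sys/kernel/config/nvmet) follow the standard Linux nvmet configfs flow. A condensed sketch of that flow — it requires root and a kernel with the nvmet and nvmet_tcp modules; the NQN, namespace device, and listen address are the values from this log, and the model-name write is omitted for brevity:

```shell
#!/usr/bin/env bash
# Condensed sketch of configure_kernel_target (nvmf/common.sh) as traced
# above. Requires root and nvmet + nvmet_tcp; values match this log.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe -a nvmet nvmet_tcp

mkdir -p "$sub/namespaces/1" "$port"
echo 1 > "$sub/attr_allow_any_host"                 # accept any host NQN
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path" # back namespace 1 with the local NVMe disk
echo 1 > "$sub/namespaces/1/enable"                 # bring the namespace online
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/$nqn"                # expose the subsystem on the port
```

The clean_kernel_target calls later in the log undo this in reverse: unlink the port symlink, disable and rmdir the namespace, then rmdir the port and subsystem before `modprobe -r nvmet_tcp nvmet`.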
00:29:14.218 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80705, failed: 0 00:29:14.218 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 80705, failed to submit 0 00:29:14.218 success 0, unsuccess 80705, failed 0 00:29:14.218 17:11:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:14.218 17:11:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:14.218 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.505 Initializing NVMe Controllers 00:29:17.505 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:17.505 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:17.505 Initialization complete. Launching workers. 
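The transport ID passed to the abort example with -r is assembled field by field, as the trace above shows (target='trtype:tcp', then ' adrfam:IPv4' is appended, and so on through subnqn). A compact equivalent of that loop using bash indirect expansion:

```shell
# Rebuild the -r transport ID the way rabort() does: append each field
# as "name:value", space-separated, in a fixed order.
trtype=tcp
adrfam=IPv4
traddr=10.0.0.1
trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn
target=""
for r in trtype adrfam traddr trsvcid subnqn; do
    target="${target:+$target }$r:${!r}"   # ${!r} expands the variable named by $r
done
echo "$target"
```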
00:29:17.505 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 134062, failed: 0 00:29:17.505 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33426, failed to submit 100636 00:29:17.505 success 0, unsuccess 33426, failed 0 00:29:17.505 17:11:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:17.505 17:11:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:17.505 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.040 Initializing NVMe Controllers 00:29:20.040 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:20.040 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:20.040 Initialization complete. Launching workers. 
00:29:20.040 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 127559, failed: 0 00:29:20.040 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31898, failed to submit 95661 00:29:20.040 success 0, unsuccess 31898, failed 0 00:29:20.040 17:11:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:20.040 17:11:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:20.040 17:11:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:20.040 17:11:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:20.040 17:11:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:20.040 17:11:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:20.040 17:11:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:20.040 17:11:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:20.040 17:11:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:20.299 17:11:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:22.836 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:22.836 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:23.776 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:23.776 00:29:23.776 real 0m16.456s 00:29:23.776 user 0m7.918s 00:29:23.776 sys 0m4.623s 00:29:23.776 17:11:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:23.776 17:11:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:23.776 ************************************ 00:29:23.776 END TEST kernel_target_abort 00:29:23.776 ************************************ 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:23.776 rmmod nvme_tcp 00:29:23.776 rmmod nvme_fabrics 
00:29:23.776 rmmod nvme_keyring 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 269878 ']' 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 269878 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 269878 ']' 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 269878 00:29:23.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (269878) - No such process 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 269878 is not found' 00:29:23.776 Process with pid 269878 is not found 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:23.776 17:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:26.340 Waiting for block devices as requested 00:29:26.340 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:26.340 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:26.639 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:26.639 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:26.639 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:26.639 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:26.898 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:26.898 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:26.898 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:26.898 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:27.155 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:27.156 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:27.156 0000:80:04.4 (8086 2021): vfio-pci 
-> ioatdma 00:29:27.156 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:27.414 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:27.414 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:27.414 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:27.673 17:11:34 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:27.673 17:11:34 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:27.673 17:11:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:27.673 17:11:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:27.673 17:11:34 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.673 17:11:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:27.673 17:11:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.576 17:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:29.576 00:29:29.576 real 0m46.334s 00:29:29.576 user 1m7.709s 00:29:29.576 sys 0m14.743s 00:29:29.576 17:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:29.576 17:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:29.576 ************************************ 00:29:29.576 END TEST nvmf_abort_qd_sizes 00:29:29.576 ************************************ 00:29:29.576 17:11:36 -- common/autotest_common.sh@1142 -- # return 0 00:29:29.576 17:11:36 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:29.576 17:11:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:29.576 17:11:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.576 17:11:36 -- common/autotest_common.sh@10 -- # set +x 00:29:29.576 ************************************ 00:29:29.576 START TEST keyring_file 00:29:29.576 
************************************ 00:29:29.576 17:11:36 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:29.835 * Looking for test storage... 00:29:29.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.835 
17:11:36 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.835 17:11:36 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.835 17:11:36 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.835 17:11:36 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.835 17:11:36 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.835 17:11:36 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.835 17:11:36 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.835 17:11:36 
keyring_file -- paths/export.sh@5 -- # export PATH 00:29:29.835 17:11:36 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FTHcXTdrjK 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FTHcXTdrjK 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FTHcXTdrjK 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FTHcXTdrjK 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.060DJZdB5A 00:29:29.835 17:11:36 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:29.835 17:11:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.060DJZdB5A 00:29:29.835 17:11:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.060DJZdB5A 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.060DJZdB5A 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@30 -- # tgtpid=278510 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@32 -- # waitforlisten 278510 00:29:29.835 17:11:36 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:29.835 17:11:36 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 278510 ']' 00:29:29.835 17:11:36 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.835 17:11:36 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:29.835 17:11:36 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:29.835 17:11:36 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:29.835 17:11:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:30.094 [2024-07-15 17:11:36.504432] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:29:30.094 [2024-07-15 17:11:36.504480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278510 ] 00:29:30.094 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.094 [2024-07-15 17:11:36.556113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.094 [2024-07-15 17:11:36.628482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.661 17:11:37 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:30.661 17:11:37 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:30.661 17:11:37 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:30.661 17:11:37 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.661 17:11:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:30.661 [2024-07-15 17:11:37.312353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.920 null0 00:29:30.920 [2024-07-15 17:11:37.344374] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:30.920 [2024-07-15 17:11:37.344575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:30.920 [2024-07-15 17:11:37.352389] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:30.920 17:11:37 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.920 17:11:37 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:30.920 17:11:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:30.920 17:11:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:30.920 17:11:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:30.920 17:11:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:30.920 17:11:37 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:30.920 17:11:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:30.920 17:11:37 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:30.920 17:11:37 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.920 17:11:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:30.920 [2024-07-15 17:11:37.364419] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:30.920 request: 00:29:30.920 { 00:29:30.920 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:30.920 "secure_channel": false, 00:29:30.920 "listen_address": { 00:29:30.920 "trtype": "tcp", 00:29:30.920 "traddr": "127.0.0.1", 00:29:30.920 "trsvcid": "4420" 00:29:30.920 }, 00:29:30.920 "method": "nvmf_subsystem_add_listener", 00:29:30.921 "req_id": 1 00:29:30.921 } 00:29:30.921 Got JSON-RPC error response 00:29:30.921 response: 00:29:30.921 { 00:29:30.921 "code": -32602, 00:29:30.921 "message": "Invalid parameters" 00:29:30.921 } 00:29:30.921 17:11:37 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:30.921 17:11:37 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:30.921 17:11:37 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:30.921 17:11:37 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:29:30.921 17:11:37 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:30.921 17:11:37 keyring_file -- keyring/file.sh@46 -- # bperfpid=278535 00:29:30.921 17:11:37 keyring_file -- keyring/file.sh@48 -- # waitforlisten 278535 /var/tmp/bperf.sock 00:29:30.921 17:11:37 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 278535 ']' 00:29:30.921 17:11:37 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:30.921 17:11:37 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:30.921 17:11:37 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:30.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:30.921 17:11:37 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:30.921 17:11:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:30.921 17:11:37 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:30.921 [2024-07-15 17:11:37.411551] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
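The `NOT ... es=0 ... (( es > 128 )) ... (( !es == 0 ))` dance traced above (autotest_common.sh in the log) is a negative-test wrapper: the duplicate-listener RPC is *expected* to fail, so the harness runs it, captures the exit status, and inverts it. A minimal sketch of the pattern, reconstructed from the trace rather than copied from SPDK's helper:

```shell
# NOT: run a command that must fail; succeed only if it did fail.
# (The real helper also special-cases statuses > 128, i.e. deaths by
# signal -- omitted here for brevity.)
NOT() {
    local es=0
    "$@" || es=$?
    # invert: exit 0 iff the wrapped command exited nonzero
    (( es != 0 ))
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success caught"
```

This keeps the whole test script runnable under `set -e` while still asserting that specific RPCs are rejected.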
00:29:30.921 [2024-07-15 17:11:37.411594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278535 ] 00:29:30.921 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.921 [2024-07-15 17:11:37.464844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.921 [2024-07-15 17:11:37.543991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.859 17:11:38 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:31.859 17:11:38 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:31.859 17:11:38 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FTHcXTdrjK 00:29:31.859 17:11:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FTHcXTdrjK 00:29:31.859 17:11:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.060DJZdB5A 00:29:31.859 17:11:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.060DJZdB5A 00:29:32.116 17:11:38 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:32.116 17:11:38 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:32.116 17:11:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.116 17:11:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.116 17:11:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:32.116 17:11:38 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.FTHcXTdrjK == 
\/\t\m\p\/\t\m\p\.\F\T\H\c\X\T\d\r\j\K ]] 00:29:32.116 17:11:38 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:29:32.116 17:11:38 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:32.116 17:11:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.116 17:11:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.116 17:11:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:32.374 17:11:38 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.060DJZdB5A == \/\t\m\p\/\t\m\p\.\0\6\0\D\J\Z\d\B\5\A ]] 00:29:32.374 17:11:38 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:32.374 17:11:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:32.374 17:11:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.374 17:11:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.374 17:11:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.374 17:11:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:32.633 17:11:39 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:32.633 17:11:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:32.633 17:11:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:32.633 17:11:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.633 17:11:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.633 17:11:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.633 17:11:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:32.633 17:11:39 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:29:32.633 17:11:39 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:32.633 17:11:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:32.891 [2024-07-15 17:11:39.417177] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:32.891 nvme0n1 00:29:32.891 17:11:39 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:32.891 17:11:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:32.891 17:11:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.891 17:11:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.891 17:11:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:32.891 17:11:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.149 17:11:39 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:33.149 17:11:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:33.149 17:11:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:33.149 17:11:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:33.149 17:11:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:33.149 17:11:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:33.149 17:11:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.407 17:11:39 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:33.407 17:11:39 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:33.407 Running I/O for 1 seconds... 00:29:34.340 00:29:34.340 Latency(us) 00:29:34.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.340 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:34.340 nvme0n1 : 1.01 14068.85 54.96 0.00 0.00 9054.46 3433.52 13734.07 00:29:34.340 =================================================================================================================== 00:29:34.340 Total : 14068.85 54.96 0.00 0.00 9054.46 3433.52 13734.07 00:29:34.340 0 00:29:34.340 17:11:40 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:34.340 17:11:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:34.597 17:11:41 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:34.597 17:11:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:34.597 17:11:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.597 17:11:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.597 17:11:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.597 17:11:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:34.855 17:11:41 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:34.855 17:11:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:34.855 17:11:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:34.855 17:11:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.855 17:11:41 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.855 17:11:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:34.855 17:11:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.855 17:11:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:34.855 17:11:41 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:34.855 17:11:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:34.855 17:11:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:34.855 17:11:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:34.855 17:11:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:34.855 17:11:41 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:34.855 17:11:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:34.855 17:11:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:34.855 17:11:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:35.113 [2024-07-15 17:11:41.689052] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:35.113 [2024-07-15 17:11:41.689985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1050770 (107): Transport endpoint is not connected 00:29:35.113 [2024-07-15 17:11:41.690980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1050770 (9): Bad file descriptor 00:29:35.113 [2024-07-15 17:11:41.691981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:35.113 [2024-07-15 17:11:41.691990] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:35.113 [2024-07-15 17:11:41.691996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:35.113 request: 00:29:35.113 { 00:29:35.113 "name": "nvme0", 00:29:35.113 "trtype": "tcp", 00:29:35.113 "traddr": "127.0.0.1", 00:29:35.113 "adrfam": "ipv4", 00:29:35.113 "trsvcid": "4420", 00:29:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.113 "prchk_reftag": false, 00:29:35.113 "prchk_guard": false, 00:29:35.113 "hdgst": false, 00:29:35.113 "ddgst": false, 00:29:35.113 "psk": "key1", 00:29:35.113 "method": "bdev_nvme_attach_controller", 00:29:35.113 "req_id": 1 00:29:35.113 } 00:29:35.113 Got JSON-RPC error response 00:29:35.113 response: 00:29:35.113 { 00:29:35.113 "code": -5, 00:29:35.113 "message": "Input/output error" 00:29:35.113 } 00:29:35.113 17:11:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:35.113 17:11:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:35.113 17:11:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:35.113 17:11:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:35.113 17:11:41 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:35.113 
17:11:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:35.113 17:11:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:35.113 17:11:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:35.113 17:11:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:35.113 17:11:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.371 17:11:41 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:35.371 17:11:41 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:35.371 17:11:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:35.371 17:11:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:35.371 17:11:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:35.371 17:11:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:35.371 17:11:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.630 17:11:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:35.630 17:11:42 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:35.630 17:11:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:35.630 17:11:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:35.630 17:11:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:35.890 17:11:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:35.890 17:11:42 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:35.890 17:11:42 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.148 17:11:42 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:36.148 17:11:42 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.FTHcXTdrjK 00:29:36.148 17:11:42 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FTHcXTdrjK 00:29:36.148 17:11:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:36.148 17:11:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FTHcXTdrjK 00:29:36.148 17:11:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:36.148 17:11:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:36.148 17:11:42 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:36.148 17:11:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:36.148 17:11:42 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FTHcXTdrjK 00:29:36.149 17:11:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FTHcXTdrjK 00:29:36.149 [2024-07-15 17:11:42.732041] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FTHcXTdrjK': 0100660 00:29:36.149 [2024-07-15 17:11:42.732064] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:36.149 request: 00:29:36.149 { 00:29:36.149 "name": "key0", 00:29:36.149 "path": "/tmp/tmp.FTHcXTdrjK", 00:29:36.149 "method": "keyring_file_add_key", 00:29:36.149 "req_id": 1 00:29:36.149 } 00:29:36.149 Got JSON-RPC error response 00:29:36.149 response: 00:29:36.149 { 00:29:36.149 "code": -1, 00:29:36.149 "message": "Operation not permitted" 
00:29:36.149 } 00:29:36.149 17:11:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:36.149 17:11:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:36.149 17:11:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:36.149 17:11:42 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:36.149 17:11:42 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.FTHcXTdrjK 00:29:36.149 17:11:42 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FTHcXTdrjK 00:29:36.149 17:11:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FTHcXTdrjK 00:29:36.407 17:11:42 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.FTHcXTdrjK 00:29:36.407 17:11:42 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:36.407 17:11:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:36.408 17:11:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:36.408 17:11:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:36.408 17:11:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:36.408 17:11:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.667 17:11:43 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:36.667 17:11:43 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:36.667 17:11:43 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:36.667 17:11:43 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:36.667 17:11:43 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:36.667 17:11:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:36.667 17:11:43 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:36.667 17:11:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:36.667 17:11:43 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:36.667 17:11:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:36.667 [2024-07-15 17:11:43.253431] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FTHcXTdrjK': No such file or directory 00:29:36.667 [2024-07-15 17:11:43.253447] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:36.667 [2024-07-15 17:11:43.253468] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:36.667 [2024-07-15 17:11:43.253473] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:36.667 [2024-07-15 17:11:43.253486] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:36.667 request: 00:29:36.667 { 00:29:36.667 "name": "nvme0", 00:29:36.667 "trtype": "tcp", 00:29:36.667 "traddr": "127.0.0.1", 00:29:36.667 "adrfam": "ipv4", 00:29:36.667 "trsvcid": "4420", 00:29:36.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:36.667 
"prchk_reftag": false, 00:29:36.667 "prchk_guard": false, 00:29:36.667 "hdgst": false, 00:29:36.667 "ddgst": false, 00:29:36.667 "psk": "key0", 00:29:36.667 "method": "bdev_nvme_attach_controller", 00:29:36.667 "req_id": 1 00:29:36.667 } 00:29:36.667 Got JSON-RPC error response 00:29:36.667 response: 00:29:36.667 { 00:29:36.667 "code": -19, 00:29:36.667 "message": "No such device" 00:29:36.667 } 00:29:36.667 17:11:43 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:36.667 17:11:43 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:36.667 17:11:43 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:36.667 17:11:43 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:36.667 17:11:43 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:36.667 17:11:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:36.925 17:11:43 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:36.925 17:11:43 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:36.925 17:11:43 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:36.925 17:11:43 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:36.925 17:11:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:36.925 17:11:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:36.925 17:11:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9WSE0uT9WJ 00:29:36.925 17:11:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:36.925 17:11:43 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:36.925 17:11:43 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:36.925 17:11:43 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:36.925 17:11:43 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:36.925 17:11:43 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:36.925 17:11:43 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:36.925 17:11:43 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9WSE0uT9WJ 00:29:36.925 17:11:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9WSE0uT9WJ 00:29:36.925 17:11:43 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.9WSE0uT9WJ 00:29:36.925 17:11:43 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9WSE0uT9WJ 00:29:36.925 17:11:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9WSE0uT9WJ 00:29:37.183 17:11:43 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:37.183 17:11:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:37.442 nvme0n1 00:29:37.442 17:11:43 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:37.442 17:11:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:37.442 17:11:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.442 17:11:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.442 17:11:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:37.442 17:11:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
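The earlier `keyring_file_add_key` failure ("Invalid permissions for key file '/tmp/tmp.FTHcXTdrjK': 0100660") shows why the test flips the file between `chmod 0660` and `chmod 0600`: keyring_file refuses key files accessible beyond their owner. A standalone illustration of such a gate (this is not SPDK's actual check, and `stat -c %a` assumes GNU coreutils):

```shell
# Reject a key file whose mode grants any group/other access,
# i.e. anything looser than 0600 -- mirroring the 0660-rejected /
# 0600-accepted sequence in the log above.
check_key_perms() {
    local mode
    mode=$(stat -c %a "$1")
    case $mode in
        600|400) return 0 ;;
        *) echo "Invalid permissions for key file '$1': 0$mode" >&2
           return 1 ;;
    esac
}

keyfile=$(mktemp)
chmod 0660 "$keyfile"
check_key_perms "$keyfile" || echo "rejected"
chmod 0600 "$keyfile"
check_key_perms "$keyfile" && echo "accepted"
rm -f "$keyfile"
```

Requiring owner-only permissions is the usual defense for on-disk PSKs: a group- or world-readable TLS key is as good as leaked.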
00:29:37.442 17:11:44 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:37.442 17:11:44 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:37.442 17:11:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:37.701 17:11:44 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:37.701 17:11:44 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:37.701 17:11:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.701 17:11:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.701 17:11:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:37.960 17:11:44 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:37.960 17:11:44 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:37.960 17:11:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:37.960 17:11:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.960 17:11:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.960 17:11:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:37.960 17:11:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.960 17:11:44 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:37.960 17:11:44 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:37.960 17:11:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:38.219 17:11:44 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:29:38.219 17:11:44 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:38.219 17:11:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.478 17:11:44 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:38.478 17:11:44 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9WSE0uT9WJ 00:29:38.478 17:11:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9WSE0uT9WJ 00:29:38.737 17:11:45 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.060DJZdB5A 00:29:38.737 17:11:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.060DJZdB5A 00:29:38.737 17:11:45 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:38.737 17:11:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:38.996 nvme0n1 00:29:38.996 17:11:45 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:38.996 17:11:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:39.255 17:11:45 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:39.255 "subsystems": [ 00:29:39.255 { 00:29:39.255 "subsystem": "keyring", 00:29:39.255 "config": [ 00:29:39.255 { 00:29:39.255 "method": "keyring_file_add_key", 00:29:39.255 
"params": { 00:29:39.255 "name": "key0", 00:29:39.255 "path": "/tmp/tmp.9WSE0uT9WJ" 00:29:39.255 } 00:29:39.255 }, 00:29:39.255 { 00:29:39.255 "method": "keyring_file_add_key", 00:29:39.255 "params": { 00:29:39.255 "name": "key1", 00:29:39.255 "path": "/tmp/tmp.060DJZdB5A" 00:29:39.255 } 00:29:39.255 } 00:29:39.255 ] 00:29:39.255 }, 00:29:39.255 { 00:29:39.255 "subsystem": "iobuf", 00:29:39.255 "config": [ 00:29:39.255 { 00:29:39.255 "method": "iobuf_set_options", 00:29:39.255 "params": { 00:29:39.255 "small_pool_count": 8192, 00:29:39.255 "large_pool_count": 1024, 00:29:39.255 "small_bufsize": 8192, 00:29:39.255 "large_bufsize": 135168 00:29:39.255 } 00:29:39.255 } 00:29:39.255 ] 00:29:39.255 }, 00:29:39.255 { 00:29:39.255 "subsystem": "sock", 00:29:39.255 "config": [ 00:29:39.255 { 00:29:39.255 "method": "sock_set_default_impl", 00:29:39.255 "params": { 00:29:39.255 "impl_name": "posix" 00:29:39.255 } 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "method": "sock_impl_set_options", 00:29:39.256 "params": { 00:29:39.256 "impl_name": "ssl", 00:29:39.256 "recv_buf_size": 4096, 00:29:39.256 "send_buf_size": 4096, 00:29:39.256 "enable_recv_pipe": true, 00:29:39.256 "enable_quickack": false, 00:29:39.256 "enable_placement_id": 0, 00:29:39.256 "enable_zerocopy_send_server": true, 00:29:39.256 "enable_zerocopy_send_client": false, 00:29:39.256 "zerocopy_threshold": 0, 00:29:39.256 "tls_version": 0, 00:29:39.256 "enable_ktls": false 00:29:39.256 } 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "method": "sock_impl_set_options", 00:29:39.256 "params": { 00:29:39.256 "impl_name": "posix", 00:29:39.256 "recv_buf_size": 2097152, 00:29:39.256 "send_buf_size": 2097152, 00:29:39.256 "enable_recv_pipe": true, 00:29:39.256 "enable_quickack": false, 00:29:39.256 "enable_placement_id": 0, 00:29:39.256 "enable_zerocopy_send_server": true, 00:29:39.256 "enable_zerocopy_send_client": false, 00:29:39.256 "zerocopy_threshold": 0, 00:29:39.256 "tls_version": 0, 00:29:39.256 "enable_ktls": false 
00:29:39.256 } 00:29:39.256 } 00:29:39.256 ] 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "subsystem": "vmd", 00:29:39.256 "config": [] 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "subsystem": "accel", 00:29:39.256 "config": [ 00:29:39.256 { 00:29:39.256 "method": "accel_set_options", 00:29:39.256 "params": { 00:29:39.256 "small_cache_size": 128, 00:29:39.256 "large_cache_size": 16, 00:29:39.256 "task_count": 2048, 00:29:39.256 "sequence_count": 2048, 00:29:39.256 "buf_count": 2048 00:29:39.256 } 00:29:39.256 } 00:29:39.256 ] 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "subsystem": "bdev", 00:29:39.256 "config": [ 00:29:39.256 { 00:29:39.256 "method": "bdev_set_options", 00:29:39.256 "params": { 00:29:39.256 "bdev_io_pool_size": 65535, 00:29:39.256 "bdev_io_cache_size": 256, 00:29:39.256 "bdev_auto_examine": true, 00:29:39.256 "iobuf_small_cache_size": 128, 00:29:39.256 "iobuf_large_cache_size": 16 00:29:39.256 } 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "method": "bdev_raid_set_options", 00:29:39.256 "params": { 00:29:39.256 "process_window_size_kb": 1024 00:29:39.256 } 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "method": "bdev_iscsi_set_options", 00:29:39.256 "params": { 00:29:39.256 "timeout_sec": 30 00:29:39.256 } 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "method": "bdev_nvme_set_options", 00:29:39.256 "params": { 00:29:39.256 "action_on_timeout": "none", 00:29:39.256 "timeout_us": 0, 00:29:39.256 "timeout_admin_us": 0, 00:29:39.256 "keep_alive_timeout_ms": 10000, 00:29:39.256 "arbitration_burst": 0, 00:29:39.256 "low_priority_weight": 0, 00:29:39.256 "medium_priority_weight": 0, 00:29:39.256 "high_priority_weight": 0, 00:29:39.256 "nvme_adminq_poll_period_us": 10000, 00:29:39.256 "nvme_ioq_poll_period_us": 0, 00:29:39.256 "io_queue_requests": 512, 00:29:39.256 "delay_cmd_submit": true, 00:29:39.256 "transport_retry_count": 4, 00:29:39.256 "bdev_retry_count": 3, 00:29:39.256 "transport_ack_timeout": 0, 00:29:39.256 "ctrlr_loss_timeout_sec": 0, 00:29:39.256 
"reconnect_delay_sec": 0, 00:29:39.256 "fast_io_fail_timeout_sec": 0, 00:29:39.256 "disable_auto_failback": false, 00:29:39.256 "generate_uuids": false, 00:29:39.256 "transport_tos": 0, 00:29:39.256 "nvme_error_stat": false, 00:29:39.256 "rdma_srq_size": 0, 00:29:39.256 "io_path_stat": false, 00:29:39.256 "allow_accel_sequence": false, 00:29:39.256 "rdma_max_cq_size": 0, 00:29:39.256 "rdma_cm_event_timeout_ms": 0, 00:29:39.256 "dhchap_digests": [ 00:29:39.256 "sha256", 00:29:39.256 "sha384", 00:29:39.256 "sha512" 00:29:39.256 ], 00:29:39.256 "dhchap_dhgroups": [ 00:29:39.256 "null", 00:29:39.256 "ffdhe2048", 00:29:39.256 "ffdhe3072", 00:29:39.256 "ffdhe4096", 00:29:39.256 "ffdhe6144", 00:29:39.256 "ffdhe8192" 00:29:39.256 ] 00:29:39.256 } 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "method": "bdev_nvme_attach_controller", 00:29:39.256 "params": { 00:29:39.256 "name": "nvme0", 00:29:39.256 "trtype": "TCP", 00:29:39.256 "adrfam": "IPv4", 00:29:39.256 "traddr": "127.0.0.1", 00:29:39.256 "trsvcid": "4420", 00:29:39.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:39.256 "prchk_reftag": false, 00:29:39.256 "prchk_guard": false, 00:29:39.256 "ctrlr_loss_timeout_sec": 0, 00:29:39.256 "reconnect_delay_sec": 0, 00:29:39.256 "fast_io_fail_timeout_sec": 0, 00:29:39.256 "psk": "key0", 00:29:39.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:39.256 "hdgst": false, 00:29:39.256 "ddgst": false 00:29:39.256 } 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "method": "bdev_nvme_set_hotplug", 00:29:39.256 "params": { 00:29:39.256 "period_us": 100000, 00:29:39.256 "enable": false 00:29:39.256 } 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "method": "bdev_wait_for_examine" 00:29:39.256 } 00:29:39.256 ] 00:29:39.256 }, 00:29:39.256 { 00:29:39.256 "subsystem": "nbd", 00:29:39.256 "config": [] 00:29:39.256 } 00:29:39.256 ] 00:29:39.256 }' 00:29:39.256 17:11:45 keyring_file -- keyring/file.sh@114 -- # killprocess 278535 00:29:39.256 17:11:45 keyring_file -- common/autotest_common.sh@948 -- 
# '[' -z 278535 ']' 00:29:39.256 17:11:45 keyring_file -- common/autotest_common.sh@952 -- # kill -0 278535 00:29:39.256 17:11:45 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:39.256 17:11:45 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:39.256 17:11:45 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 278535 00:29:39.256 17:11:45 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:39.256 17:11:45 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:39.256 17:11:45 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 278535' 00:29:39.256 killing process with pid 278535 00:29:39.256 17:11:45 keyring_file -- common/autotest_common.sh@967 -- # kill 278535 00:29:39.256 Received shutdown signal, test time was about 1.000000 seconds 00:29:39.256 00:29:39.256 Latency(us) 00:29:39.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.257 =================================================================================================================== 00:29:39.257 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:39.257 17:11:45 keyring_file -- common/autotest_common.sh@972 -- # wait 278535 00:29:39.517 17:11:46 keyring_file -- keyring/file.sh@117 -- # bperfpid=280059 00:29:39.517 17:11:46 keyring_file -- keyring/file.sh@119 -- # waitforlisten 280059 /var/tmp/bperf.sock 00:29:39.517 17:11:46 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 280059 ']' 00:29:39.517 17:11:46 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:39.517 17:11:46 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:39.517 17:11:46 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:39.517 
17:11:46 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:39.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:39.517 17:11:46 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:39.517 "subsystems": [ 00:29:39.517 { 00:29:39.517 "subsystem": "keyring", 00:29:39.517 "config": [ 00:29:39.517 { 00:29:39.517 "method": "keyring_file_add_key", 00:29:39.517 "params": { 00:29:39.517 "name": "key0", 00:29:39.517 "path": "/tmp/tmp.9WSE0uT9WJ" 00:29:39.517 } 00:29:39.517 }, 00:29:39.517 { 00:29:39.517 "method": "keyring_file_add_key", 00:29:39.517 "params": { 00:29:39.517 "name": "key1", 00:29:39.517 "path": "/tmp/tmp.060DJZdB5A" 00:29:39.517 } 00:29:39.517 } 00:29:39.517 ] 00:29:39.517 }, 00:29:39.517 { 00:29:39.517 "subsystem": "iobuf", 00:29:39.517 "config": [ 00:29:39.517 { 00:29:39.517 "method": "iobuf_set_options", 00:29:39.517 "params": { 00:29:39.517 "small_pool_count": 8192, 00:29:39.517 "large_pool_count": 1024, 00:29:39.517 "small_bufsize": 8192, 00:29:39.517 "large_bufsize": 135168 00:29:39.517 } 00:29:39.517 } 00:29:39.517 ] 00:29:39.517 }, 00:29:39.517 { 00:29:39.517 "subsystem": "sock", 00:29:39.517 "config": [ 00:29:39.517 { 00:29:39.517 "method": "sock_set_default_impl", 00:29:39.517 "params": { 00:29:39.517 "impl_name": "posix" 00:29:39.517 } 00:29:39.517 }, 00:29:39.517 { 00:29:39.517 "method": "sock_impl_set_options", 00:29:39.517 "params": { 00:29:39.517 "impl_name": "ssl", 00:29:39.517 "recv_buf_size": 4096, 00:29:39.517 "send_buf_size": 4096, 00:29:39.517 "enable_recv_pipe": true, 00:29:39.517 "enable_quickack": false, 00:29:39.517 "enable_placement_id": 0, 00:29:39.517 "enable_zerocopy_send_server": true, 00:29:39.517 "enable_zerocopy_send_client": false, 00:29:39.517 "zerocopy_threshold": 0, 00:29:39.517 "tls_version": 0, 00:29:39.517 "enable_ktls": false 00:29:39.517 } 00:29:39.517 }, 00:29:39.517 
{ 00:29:39.517 "method": "sock_impl_set_options", 00:29:39.517 "params": { 00:29:39.517 "impl_name": "posix", 00:29:39.517 "recv_buf_size": 2097152, 00:29:39.517 "send_buf_size": 2097152, 00:29:39.517 "enable_recv_pipe": true, 00:29:39.517 "enable_quickack": false, 00:29:39.517 "enable_placement_id": 0, 00:29:39.517 "enable_zerocopy_send_server": true, 00:29:39.517 "enable_zerocopy_send_client": false, 00:29:39.517 "zerocopy_threshold": 0, 00:29:39.517 "tls_version": 0, 00:29:39.517 "enable_ktls": false 00:29:39.517 } 00:29:39.517 } 00:29:39.517 ] 00:29:39.517 }, 00:29:39.517 { 00:29:39.517 "subsystem": "vmd", 00:29:39.517 "config": [] 00:29:39.517 }, 00:29:39.517 { 00:29:39.517 "subsystem": "accel", 00:29:39.517 "config": [ 00:29:39.517 { 00:29:39.517 "method": "accel_set_options", 00:29:39.517 "params": { 00:29:39.517 "small_cache_size": 128, 00:29:39.517 "large_cache_size": 16, 00:29:39.517 "task_count": 2048, 00:29:39.517 "sequence_count": 2048, 00:29:39.517 "buf_count": 2048 00:29:39.517 } 00:29:39.517 } 00:29:39.517 ] 00:29:39.517 }, 00:29:39.517 { 00:29:39.517 "subsystem": "bdev", 00:29:39.517 "config": [ 00:29:39.517 { 00:29:39.517 "method": "bdev_set_options", 00:29:39.517 "params": { 00:29:39.517 "bdev_io_pool_size": 65535, 00:29:39.517 "bdev_io_cache_size": 256, 00:29:39.517 "bdev_auto_examine": true, 00:29:39.517 "iobuf_small_cache_size": 128, 00:29:39.517 "iobuf_large_cache_size": 16 00:29:39.517 } 00:29:39.517 }, 00:29:39.517 { 00:29:39.517 "method": "bdev_raid_set_options", 00:29:39.517 "params": { 00:29:39.517 "process_window_size_kb": 1024 00:29:39.517 } 00:29:39.517 }, 00:29:39.517 { 00:29:39.517 "method": "bdev_iscsi_set_options", 00:29:39.517 "params": { 00:29:39.517 "timeout_sec": 30 00:29:39.517 } 00:29:39.517 }, 00:29:39.517 { 00:29:39.517 "method": "bdev_nvme_set_options", 00:29:39.517 "params": { 00:29:39.517 "action_on_timeout": "none", 00:29:39.517 "timeout_us": 0, 00:29:39.517 "timeout_admin_us": 0, 00:29:39.517 "keep_alive_timeout_ms": 
10000, 00:29:39.517 "arbitration_burst": 0, 00:29:39.517 "low_priority_weight": 0, 00:29:39.517 "medium_priority_weight": 0, 00:29:39.517 "high_priority_weight": 0, 00:29:39.517 "nvme_adminq_poll_period_us": 10000, 00:29:39.517 "nvme_ioq_poll_period_us": 0, 00:29:39.517 "io_queue_requests": 512, 00:29:39.517 "delay_cmd_submit": true, 00:29:39.517 "transport_retry_count": 4, 00:29:39.517 "bdev_retry_count": 3, 00:29:39.517 "transport_ack_timeout": 0, 00:29:39.517 "ctrlr_loss_timeout_sec": 0, 00:29:39.517 "reconnect_delay_sec": 0, 00:29:39.517 "fast_io_fail_timeout_sec": 0, 00:29:39.517 "disable_auto_failback": false, 00:29:39.517 "generate_uuids": false, 00:29:39.517 "transport_tos": 0, 00:29:39.517 "nvme_error_stat": false, 00:29:39.517 "rdma_srq_size": 0, 00:29:39.517 "io_path_stat": false, 00:29:39.517 "allow_accel_sequence": false, 00:29:39.518 "rdma_max_cq_size": 0, 00:29:39.518 "rdma_cm_event_timeout_ms": 0, 00:29:39.518 "dhchap_digests": [ 00:29:39.518 "sha256", 00:29:39.518 "sha384", 00:29:39.518 "sha512" 00:29:39.518 ], 00:29:39.518 "dhchap_dhgroups": [ 00:29:39.518 "null", 00:29:39.518 "ffdhe2048", 00:29:39.518 "ffdhe3072", 00:29:39.518 "ffdhe4096", 00:29:39.518 "ffdhe6144", 00:29:39.518 "ffdhe8192" 00:29:39.518 ] 00:29:39.518 } 00:29:39.518 }, 00:29:39.518 { 00:29:39.518 "method": "bdev_nvme_attach_controller", 00:29:39.518 "params": { 00:29:39.518 "name": "nvme0", 00:29:39.518 "trtype": "TCP", 00:29:39.518 "adrfam": "IPv4", 00:29:39.518 "traddr": "127.0.0.1", 00:29:39.518 "trsvcid": "4420", 00:29:39.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:39.518 "prchk_reftag": false, 00:29:39.518 "prchk_guard": false, 00:29:39.518 "ctrlr_loss_timeout_sec": 0, 00:29:39.518 "reconnect_delay_sec": 0, 00:29:39.518 "fast_io_fail_timeout_sec": 0, 00:29:39.518 "psk": "key0", 00:29:39.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:39.518 "hdgst": false, 00:29:39.518 "ddgst": false 00:29:39.518 } 00:29:39.518 }, 00:29:39.518 { 00:29:39.518 "method": 
"bdev_nvme_set_hotplug", 00:29:39.518 "params": { 00:29:39.518 "period_us": 100000, 00:29:39.518 "enable": false 00:29:39.518 } 00:29:39.518 }, 00:29:39.518 { 00:29:39.518 "method": "bdev_wait_for_examine" 00:29:39.518 } 00:29:39.518 ] 00:29:39.518 }, 00:29:39.518 { 00:29:39.518 "subsystem": "nbd", 00:29:39.518 "config": [] 00:29:39.518 } 00:29:39.518 ] 00:29:39.518 }' 00:29:39.518 17:11:46 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:39.518 17:11:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:39.518 [2024-07-15 17:11:46.086998] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 00:29:39.518 [2024-07-15 17:11:46.087052] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280059 ] 00:29:39.518 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.518 [2024-07-15 17:11:46.141425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.777 [2024-07-15 17:11:46.213701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.777 [2024-07-15 17:11:46.371903] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:40.345 17:11:46 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:40.345 17:11:46 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:40.345 17:11:46 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:40.345 17:11:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.345 17:11:46 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:40.604 17:11:47 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:40.604 17:11:47 keyring_file -- 
keyring/file.sh@121 -- # get_refcnt key0 00:29:40.604 17:11:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:40.604 17:11:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.604 17:11:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.604 17:11:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:40.604 17:11:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.604 17:11:47 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:40.604 17:11:47 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:40.604 17:11:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.604 17:11:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:40.605 17:11:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:40.605 17:11:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.605 17:11:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.864 17:11:47 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:40.864 17:11:47 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:40.864 17:11:47 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:40.864 17:11:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:41.123 17:11:47 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:41.123 17:11:47 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:41.123 17:11:47 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.9WSE0uT9WJ /tmp/tmp.060DJZdB5A 00:29:41.123 17:11:47 keyring_file -- keyring/file.sh@20 -- # killprocess 280059 
00:29:41.123 17:11:47 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 280059 ']' 00:29:41.123 17:11:47 keyring_file -- common/autotest_common.sh@952 -- # kill -0 280059 00:29:41.123 17:11:47 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:41.123 17:11:47 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:41.123 17:11:47 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 280059 00:29:41.123 17:11:47 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:41.123 17:11:47 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:41.123 17:11:47 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 280059' 00:29:41.123 killing process with pid 280059 00:29:41.124 17:11:47 keyring_file -- common/autotest_common.sh@967 -- # kill 280059 00:29:41.124 Received shutdown signal, test time was about 1.000000 seconds 00:29:41.124 00:29:41.124 Latency(us) 00:29:41.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.124 =================================================================================================================== 00:29:41.124 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:41.124 17:11:47 keyring_file -- common/autotest_common.sh@972 -- # wait 280059 00:29:41.383 17:11:47 keyring_file -- keyring/file.sh@21 -- # killprocess 278510 00:29:41.383 17:11:47 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 278510 ']' 00:29:41.383 17:11:47 keyring_file -- common/autotest_common.sh@952 -- # kill -0 278510 00:29:41.383 17:11:47 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:41.383 17:11:47 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:41.383 17:11:47 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 278510 00:29:41.383 17:11:47 keyring_file -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:29:41.383 17:11:47 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:41.383 17:11:47 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 278510' 00:29:41.383 killing process with pid 278510 00:29:41.383 17:11:47 keyring_file -- common/autotest_common.sh@967 -- # kill 278510 00:29:41.383 [2024-07-15 17:11:47.883813] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:41.383 17:11:47 keyring_file -- common/autotest_common.sh@972 -- # wait 278510 00:29:41.643 00:29:41.643 real 0m11.964s 00:29:41.643 user 0m28.362s 00:29:41.643 sys 0m2.693s 00:29:41.643 17:11:48 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:41.643 17:11:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:41.643 ************************************ 00:29:41.643 END TEST keyring_file 00:29:41.643 ************************************ 00:29:41.643 17:11:48 -- common/autotest_common.sh@1142 -- # return 0 00:29:41.643 17:11:48 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:29:41.643 17:11:48 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:41.643 17:11:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:41.643 17:11:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:41.643 17:11:48 -- common/autotest_common.sh@10 -- # set +x 00:29:41.643 ************************************ 00:29:41.643 START TEST keyring_linux 00:29:41.643 ************************************ 00:29:41.643 17:11:48 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:41.903 * Looking for test storage... 
00:29:41.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:41.903 17:11:48 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:41.903 17:11:48 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.903 17:11:48 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.903 17:11:48 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.903 17:11:48 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.903 17:11:48 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.903 17:11:48 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.903 17:11:48 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.903 17:11:48 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.903 17:11:48 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:41.903 17:11:48 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:41.903 17:11:48 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:41.903 17:11:48 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:41.904 17:11:48 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:41.904 17:11:48 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:41.904 17:11:48 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:41.904 17:11:48 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:41.904 17:11:48 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:41.904 17:11:48 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:41.904 17:11:48 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:41.904 /tmp/:spdk-test:key0 00:29:41.904 17:11:48 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:41.904 17:11:48 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:41.904 17:11:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:41.904 /tmp/:spdk-test:key1 00:29:41.904 17:11:48 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=280590 00:29:41.904 17:11:48 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 280590 00:29:41.904 17:11:48 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:41.904 17:11:48 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 280590 ']' 00:29:41.904 17:11:48 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.904 17:11:48 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:41.904 17:11:48 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.904 17:11:48 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:41.904 17:11:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:41.904 [2024-07-15 17:11:48.511428] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:29:41.904 [2024-07-15 17:11:48.511477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280590 ] 00:29:41.904 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.904 [2024-07-15 17:11:48.562837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.198 [2024-07-15 17:11:48.642900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.765 17:11:49 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:42.765 17:11:49 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:42.765 17:11:49 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:42.765 17:11:49 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.765 17:11:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:42.765 [2024-07-15 17:11:49.306156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.765 null0 00:29:42.765 [2024-07-15 17:11:49.338209] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:42.765 [2024-07-15 17:11:49.338541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:42.765 17:11:49 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.765 17:11:49 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:42.765 234744700 00:29:42.765 17:11:49 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:42.765 191035675 00:29:42.765 17:11:49 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=280815 00:29:42.765 17:11:49 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 280815 
/var/tmp/bperf.sock 00:29:42.765 17:11:49 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:42.766 17:11:49 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 280815 ']' 00:29:42.766 17:11:49 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:42.766 17:11:49 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:42.766 17:11:49 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:42.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:42.766 17:11:49 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:42.766 17:11:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:42.766 [2024-07-15 17:11:49.406353] Starting SPDK v24.09-pre git sha1 44e72e4e7 / DPDK 24.03.0 initialization... 
00:29:42.766 [2024-07-15 17:11:49.406394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280815 ] 00:29:42.766 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.024 [2024-07-15 17:11:49.459180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.024 [2024-07-15 17:11:49.530695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.627 17:11:50 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:43.627 17:11:50 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:43.627 17:11:50 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:43.627 17:11:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:43.886 17:11:50 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:43.886 17:11:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:44.146 17:11:50 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:44.146 17:11:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:44.146 [2024-07-15 17:11:50.778188] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:44.405 
nvme0n1 00:29:44.405 17:11:50 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:44.405 17:11:50 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:44.405 17:11:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:44.405 17:11:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:44.405 17:11:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:44.405 17:11:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.405 17:11:51 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:44.405 17:11:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:44.405 17:11:51 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:44.405 17:11:51 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:44.405 17:11:51 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:44.405 17:11:51 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:44.405 17:11:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.665 17:11:51 keyring_linux -- keyring/linux.sh@25 -- # sn=234744700 00:29:44.665 17:11:51 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:44.665 17:11:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:44.665 17:11:51 keyring_linux -- keyring/linux.sh@26 -- # [[ 234744700 == \2\3\4\7\4\4\7\0\0 ]] 00:29:44.665 17:11:51 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 234744700 00:29:44.665 17:11:51 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:44.665 17:11:51 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:44.665 Running I/O for 1 seconds... 00:29:46.042 00:29:46.042 Latency(us) 00:29:46.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.042 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:46.042 nvme0n1 : 1.01 15776.21 61.63 0.00 0.00 8080.29 2436.23 10941.66 00:29:46.042 =================================================================================================================== 00:29:46.042 Total : 15776.21 61.63 0.00 0.00 8080.29 2436.23 10941.66 00:29:46.042 0 00:29:46.042 17:11:52 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:46.042 17:11:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:46.042 17:11:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:46.042 17:11:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:46.042 17:11:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:46.042 17:11:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:46.042 17:11:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:46.042 17:11:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.042 17:11:52 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:46.042 17:11:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:46.042 17:11:52 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:46.043 17:11:52 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:46.043 17:11:52 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:29:46.043 17:11:52 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:46.043 17:11:52 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:46.043 17:11:52 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:46.043 17:11:52 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:46.043 17:11:52 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:46.043 17:11:52 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:46.043 17:11:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:46.302 [2024-07-15 17:11:52.850593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:46.302 [2024-07-15 17:11:52.850923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1195fd0 (107): Transport endpoint is not connected 00:29:46.302 [2024-07-15 17:11:52.851917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1195fd0 (9): Bad file descriptor 00:29:46.302 [2024-07-15 17:11:52.852919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:46.302 [2024-07-15 17:11:52.852927] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:46.302 [2024-07-15 17:11:52.852934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:46.302 request: 00:29:46.302 { 00:29:46.302 "name": "nvme0", 00:29:46.302 "trtype": "tcp", 00:29:46.302 "traddr": "127.0.0.1", 00:29:46.302 "adrfam": "ipv4", 00:29:46.302 "trsvcid": "4420", 00:29:46.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:46.302 "prchk_reftag": false, 00:29:46.302 "prchk_guard": false, 00:29:46.302 "hdgst": false, 00:29:46.302 "ddgst": false, 00:29:46.302 "psk": ":spdk-test:key1", 00:29:46.302 "method": "bdev_nvme_attach_controller", 00:29:46.302 "req_id": 1 00:29:46.302 } 00:29:46.302 Got JSON-RPC error response 00:29:46.302 response: 00:29:46.302 { 00:29:46.302 "code": -5, 00:29:46.302 "message": "Input/output error" 00:29:46.302 } 00:29:46.302 17:11:52 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:29:46.302 17:11:52 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:46.302 17:11:52 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:46.302 17:11:52 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:46.302 17:11:52 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:46.302 17:11:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:46.302 17:11:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:46.302 17:11:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:46.302 17:11:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:46.302 17:11:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:29:46.302 17:11:52 keyring_linux -- keyring/linux.sh@33 -- # sn=234744700 00:29:46.303 17:11:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 234744700 00:29:46.303 1 links removed 00:29:46.303 17:11:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:46.303 17:11:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:46.303 17:11:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:46.303 17:11:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:46.303 17:11:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:46.303 17:11:52 keyring_linux -- keyring/linux.sh@33 -- # sn=191035675 00:29:46.303 17:11:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 191035675 00:29:46.303 1 links removed 00:29:46.303 17:11:52 keyring_linux -- keyring/linux.sh@41 -- # killprocess 280815 00:29:46.303 17:11:52 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 280815 ']' 00:29:46.303 17:11:52 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 280815 00:29:46.303 17:11:52 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:46.303 17:11:52 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:46.303 17:11:52 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 280815 00:29:46.303 17:11:52 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:46.303 17:11:52 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:46.303 17:11:52 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 280815' 00:29:46.303 killing process with pid 280815 00:29:46.303 17:11:52 keyring_linux -- common/autotest_common.sh@967 -- # kill 280815 00:29:46.303 Received shutdown signal, test time was about 1.000000 seconds 00:29:46.303 00:29:46.303 Latency(us) 00:29:46.303 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.303 =================================================================================================================== 00:29:46.303 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.303 17:11:52 keyring_linux -- common/autotest_common.sh@972 -- # wait 280815 00:29:46.562 17:11:53 keyring_linux -- keyring/linux.sh@42 -- # killprocess 280590 00:29:46.562 17:11:53 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 280590 ']' 00:29:46.562 17:11:53 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 280590 00:29:46.562 17:11:53 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:46.562 17:11:53 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:46.562 17:11:53 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 280590 00:29:46.562 17:11:53 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:46.562 17:11:53 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:46.562 17:11:53 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 280590' 00:29:46.562 killing process with pid 280590 00:29:46.562 17:11:53 keyring_linux -- common/autotest_common.sh@967 -- # kill 280590 00:29:46.562 17:11:53 keyring_linux -- common/autotest_common.sh@972 -- # wait 280590 00:29:46.820 00:29:46.820 real 0m5.200s 00:29:46.820 user 0m9.146s 00:29:46.820 sys 0m1.478s 00:29:46.820 17:11:53 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:46.820 17:11:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:46.820 ************************************ 00:29:46.820 END TEST keyring_linux 00:29:46.820 ************************************ 00:29:47.079 17:11:53 -- common/autotest_common.sh@1142 -- # return 0 00:29:47.079 17:11:53 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:47.079 17:11:53 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 
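The keyring entries exercised in the test above (`keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAx… @s`) hold TLS PSKs in the NVMe/TCP interchange format that the `python -` heredoc in `nvmf/common.sh` derives from the logged `prefix`, `key`, and `digest` variables. A minimal sketch of that derivation follows; `format_interchange_psk` is a hypothetical helper name, and the 4-byte little-endian CRC32 trailer is inferred from the 48-character unpadded base64 body visible in the log, not taken from the script itself:

```python
import base64
import zlib

def format_interchange_psk(prefix: str, key: str, hmac_id: int) -> str:
    """Sketch of the NVMe/TCP TLS PSK interchange format: the configured
    key string is treated as raw bytes, a CRC32 trailer is appended, and
    the result is base64-encoded between the prefix and hash-id fields."""
    payload = key.encode("ascii")
    # CRC32 over the key bytes, appended little-endian (inferred layout)
    crc = zlib.crc32(payload).to_bytes(4, "little")
    body = base64.b64encode(payload + crc).decode("ascii")
    return f"{prefix}:{hmac_id:02}:{body}:"

# Values taken from the test output above (prefix=NVMeTLSkey-1, digest=0)
psk = format_interchange_psk("NVMeTLSkey-1",
                             "112233445566778899aabbccddeeff00", 0)
print(psk)
```

With the logged inputs this yields a `NVMeTLSkey-1:00:MTEy…:` string of the same shape as the value compared by the `keyctl print` check earlier in the test; decoding the base64 body recovers the original key plus its CRC32.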
00:29:47.079 17:11:53 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:47.079 17:11:53 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:29:47.079 17:11:53 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:29:47.079 17:11:53 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:47.079 17:11:53 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:47.079 17:11:53 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:47.079 17:11:53 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:29:47.079 17:11:53 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:47.079 17:11:53 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:47.079 17:11:53 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:47.079 17:11:53 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:47.079 17:11:53 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:47.079 17:11:53 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:47.079 17:11:53 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:47.079 17:11:53 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:47.079 17:11:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:47.079 17:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:47.079 17:11:53 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:47.079 17:11:53 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:47.079 17:11:53 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:47.079 17:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:51.272 INFO: APP EXITING 00:29:51.272 INFO: killing all VMs 00:29:51.272 INFO: killing vhost app 00:29:51.272 INFO: EXIT DONE 00:29:53.807 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:29:53.807 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:29:53.807 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:29:53.807 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:29:53.807 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:29:53.807 0000:00:04.3 (8086 2021): Already using the 
ioatdma driver 00:29:53.807 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:29:53.807 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:29:53.807 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:29:53.808 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:29:53.808 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:29:53.808 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:29:53.808 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:29:53.808 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:29:53.808 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:29:53.808 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:29:53.808 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:29:56.340 Cleaning 00:29:56.340 Removing: /var/run/dpdk/spdk0/config 00:29:56.340 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:56.340 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:56.340 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:56.340 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:56.340 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:56.340 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:56.340 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:56.340 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:56.340 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:56.340 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:56.340 Removing: /var/run/dpdk/spdk1/config 00:29:56.340 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:56.340 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:56.340 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:56.340 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:56.340 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:56.340 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 
00:29:56.340 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:56.340 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:56.340 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:56.340 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:56.340 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:56.340 Removing: /var/run/dpdk/spdk2/config 00:29:56.340 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:56.340 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:56.340 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:56.340 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:56.340 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:56.340 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:56.340 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:56.340 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:56.340 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:56.340 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:56.340 Removing: /var/run/dpdk/spdk3/config 00:29:56.340 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:56.340 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:56.340 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:56.340 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:56.340 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:56.340 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:56.340 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:56.340 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:56.340 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:56.340 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:56.340 Removing: /var/run/dpdk/spdk4/config 00:29:56.340 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:56.340 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:56.340 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:56.340 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:56.340 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:56.340 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:56.340 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:56.340 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:56.340 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:56.340 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:56.340 Removing: /dev/shm/bdev_svc_trace.1 00:29:56.340 Removing: /dev/shm/nvmf_trace.0 00:29:56.340 Removing: /dev/shm/spdk_tgt_trace.pid4089822 00:29:56.340 Removing: /var/run/dpdk/spdk0 00:29:56.340 Removing: /var/run/dpdk/spdk1 00:29:56.340 Removing: /var/run/dpdk/spdk2 00:29:56.340 Removing: /var/run/dpdk/spdk3 00:29:56.340 Removing: /var/run/dpdk/spdk4 00:29:56.340 Removing: /var/run/dpdk/spdk_pid10057 00:29:56.340 Removing: /var/run/dpdk/spdk_pid100646 00:29:56.340 Removing: /var/run/dpdk/spdk_pid105657 00:29:56.340 Removing: /var/run/dpdk/spdk_pid107256 00:29:56.340 Removing: /var/run/dpdk/spdk_pid109100 00:29:56.340 Removing: /var/run/dpdk/spdk_pid109320 00:29:56.340 Removing: /var/run/dpdk/spdk_pid109564 00:29:56.340 Removing: /var/run/dpdk/spdk_pid109803 00:29:56.340 Removing: /var/run/dpdk/spdk_pid110327 00:29:56.340 Removing: /var/run/dpdk/spdk_pid11101 00:29:56.340 Removing: /var/run/dpdk/spdk_pid112168 00:29:56.340 Removing: /var/run/dpdk/spdk_pid113155 00:29:56.340 Removing: /var/run/dpdk/spdk_pid113669 00:29:56.340 Removing: /var/run/dpdk/spdk_pid115968 00:29:56.340 Removing: /var/run/dpdk/spdk_pid116465 00:29:56.340 Removing: /var/run/dpdk/spdk_pid117193 00:29:56.340 Removing: /var/run/dpdk/spdk_pid121269 00:29:56.340 Removing: /var/run/dpdk/spdk_pid131341 00:29:56.340 Removing: /var/run/dpdk/spdk_pid135232 00:29:56.340 Removing: /var/run/dpdk/spdk_pid141438 00:29:56.340 Removing: /var/run/dpdk/spdk_pid142735 00:29:56.599 Removing: /var/run/dpdk/spdk_pid144404 00:29:56.599 Removing: 
/var/run/dpdk/spdk_pid1462 00:29:56.599 Removing: /var/run/dpdk/spdk_pid1466 00:29:56.599 Removing: /var/run/dpdk/spdk_pid149088 00:29:56.599 Removing: /var/run/dpdk/spdk_pid153104 00:29:56.599 Removing: /var/run/dpdk/spdk_pid160462 00:29:56.599 Removing: /var/run/dpdk/spdk_pid160464 00:29:56.599 Removing: /var/run/dpdk/spdk_pid165167 00:29:56.599 Removing: /var/run/dpdk/spdk_pid165332 00:29:56.599 Removing: /var/run/dpdk/spdk_pid165490 00:29:56.599 Removing: /var/run/dpdk/spdk_pid165859 00:29:56.599 Removing: /var/run/dpdk/spdk_pid165877 00:29:56.599 Removing: /var/run/dpdk/spdk_pid170338 00:29:56.599 Removing: /var/run/dpdk/spdk_pid170916 00:29:56.599 Removing: /var/run/dpdk/spdk_pid175238 00:29:56.599 Removing: /var/run/dpdk/spdk_pid177998 00:29:56.599 Removing: /var/run/dpdk/spdk_pid183385 00:29:56.599 Removing: /var/run/dpdk/spdk_pid188717 00:29:56.599 Removing: /var/run/dpdk/spdk_pid197790 00:29:56.599 Removing: /var/run/dpdk/spdk_pid19873 00:29:56.599 Removing: /var/run/dpdk/spdk_pid20139 00:29:56.599 Removing: /var/run/dpdk/spdk_pid204992 00:29:56.599 Removing: /var/run/dpdk/spdk_pid204994 00:29:56.599 Removing: /var/run/dpdk/spdk_pid223028 00:29:56.599 Removing: /var/run/dpdk/spdk_pid223680 00:29:56.599 Removing: /var/run/dpdk/spdk_pid224218 00:29:56.599 Removing: /var/run/dpdk/spdk_pid224907 00:29:56.599 Removing: /var/run/dpdk/spdk_pid225878 00:29:56.599 Removing: /var/run/dpdk/spdk_pid226570 00:29:56.599 Removing: /var/run/dpdk/spdk_pid227082 00:29:56.599 Removing: /var/run/dpdk/spdk_pid227754 00:29:56.599 Removing: /var/run/dpdk/spdk_pid232004 00:29:56.599 Removing: /var/run/dpdk/spdk_pid232231 00:29:56.599 Removing: /var/run/dpdk/spdk_pid238102 00:29:56.600 Removing: /var/run/dpdk/spdk_pid238355 00:29:56.600 Removing: /var/run/dpdk/spdk_pid240596 00:29:56.600 Removing: /var/run/dpdk/spdk_pid2406 00:29:56.600 Removing: /var/run/dpdk/spdk_pid24364 00:29:56.600 Removing: /var/run/dpdk/spdk_pid248686 00:29:56.600 Removing: /var/run/dpdk/spdk_pid248818 
00:29:56.600 Removing: /var/run/dpdk/spdk_pid253699 00:29:56.600 Removing: /var/run/dpdk/spdk_pid255590 00:29:56.600 Removing: /var/run/dpdk/spdk_pid257565 00:29:56.600 Removing: /var/run/dpdk/spdk_pid258821 00:29:56.600 Removing: /var/run/dpdk/spdk_pid260794 00:29:56.600 Removing: /var/run/dpdk/spdk_pid261851 00:29:56.600 Removing: /var/run/dpdk/spdk_pid270585 00:29:56.600 Removing: /var/run/dpdk/spdk_pid271053 00:29:56.600 Removing: /var/run/dpdk/spdk_pid271541 00:29:56.600 Removing: /var/run/dpdk/spdk_pid273767 00:29:56.600 Removing: /var/run/dpdk/spdk_pid274233 00:29:56.600 Removing: /var/run/dpdk/spdk_pid274783 00:29:56.600 Removing: /var/run/dpdk/spdk_pid278510 00:29:56.600 Removing: /var/run/dpdk/spdk_pid278535 00:29:56.600 Removing: /var/run/dpdk/spdk_pid280059 00:29:56.600 Removing: /var/run/dpdk/spdk_pid280590 00:29:56.600 Removing: /var/run/dpdk/spdk_pid280815 00:29:56.600 Removing: /var/run/dpdk/spdk_pid30093 00:29:56.600 Removing: /var/run/dpdk/spdk_pid32816 00:29:56.600 Removing: /var/run/dpdk/spdk_pid3477 00:29:56.600 Removing: /var/run/dpdk/spdk_pid4087696 00:29:56.600 Removing: /var/run/dpdk/spdk_pid4088751 00:29:56.600 Removing: /var/run/dpdk/spdk_pid4089822 00:29:56.600 Removing: /var/run/dpdk/spdk_pid4090451 00:29:56.600 Removing: /var/run/dpdk/spdk_pid4091402 00:29:56.600 Removing: /var/run/dpdk/spdk_pid4091644 00:29:56.600 Removing: /var/run/dpdk/spdk_pid4092611 00:29:56.600 Removing: /var/run/dpdk/spdk_pid4092841 00:29:56.600 Removing: /var/run/dpdk/spdk_pid4093039 00:29:56.600 Removing: /var/run/dpdk/spdk_pid4094596 00:29:56.600 Removing: /var/run/dpdk/spdk_pid4095733 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4096012 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4096291 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4096600 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4096887 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4097137 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4097391 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4097665 
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4098498 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4101445 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4101882 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4102146 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4102161 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4102652 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4102782 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4103155 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4103384 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4103643 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4103838 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4103936 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4104148 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4104701 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4104920 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4105229 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4105486 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4105535 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4105602 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4105848 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4106109 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4106361 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4106611 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4106872 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4107118 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4107372 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4107622 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4107881 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4108153 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4108439 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4108705 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4108966 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4109218 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4109476 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4109740 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4110002 00:29:56.859 Removing: /var/run/dpdk/spdk_pid4110282 00:29:56.859 Removing: 
/var/run/dpdk/spdk_pid4110547
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4110849
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4110949
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4111320
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4115423
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4158445
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4162690
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4173233
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4178626
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4182420
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4183095
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4189097
00:29:56.859 Removing: /var/run/dpdk/spdk_pid43142
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4402
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4875
00:29:56.859 Removing: /var/run/dpdk/spdk_pid4877
00:29:56.859 Removing: /var/run/dpdk/spdk_pid5127
00:29:56.859 Removing: /var/run/dpdk/spdk_pid52140
00:29:56.859 Removing: /var/run/dpdk/spdk_pid5337
00:29:56.859 Removing: /var/run/dpdk/spdk_pid5340
00:29:56.859 Removing: /var/run/dpdk/spdk_pid53751
00:29:56.859 Removing: /var/run/dpdk/spdk_pid54671
00:29:56.859 Removing: /var/run/dpdk/spdk_pid6259
00:29:56.859 Removing: /var/run/dpdk/spdk_pid7130
00:29:56.859 Removing: /var/run/dpdk/spdk_pid72005
00:29:57.118 Removing: /var/run/dpdk/spdk_pid75778
00:29:57.118 Removing: /var/run/dpdk/spdk_pid7912
00:29:57.118 Removing: /var/run/dpdk/spdk_pid8564
00:29:57.118 Removing: /var/run/dpdk/spdk_pid8572
00:29:57.118 Removing: /var/run/dpdk/spdk_pid8804
00:29:57.118 Clean
00:29:57.118 17:12:03 -- common/autotest_common.sh@1451 -- # return 0
00:29:57.118 17:12:03 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:29:57.118 17:12:03 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:57.118 17:12:03 -- common/autotest_common.sh@10 -- # set +x
00:29:57.118 17:12:03 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:29:57.118 17:12:03 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:57.118 17:12:03 -- common/autotest_common.sh@10 -- # set +x
00:29:57.118 17:12:03 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:29:57.118 17:12:03 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:29:57.118 17:12:03 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:29:57.118 17:12:03 -- spdk/autotest.sh@391 -- # hash lcov
00:29:57.118 17:12:03 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:29:57.118 17:12:03 -- spdk/autotest.sh@393 -- # hostname
00:29:57.118 17:12:03 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:29:57.378 geninfo: WARNING: invalid characters removed from testname!
00:30:19.305 17:12:24 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:20.241 17:12:26 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:22.178 17:12:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:24.083 17:12:30 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:25.985 17:12:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:27.363 17:12:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:29.269 17:12:35 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:29.269 17:12:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:29.269 17:12:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:29.269 17:12:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:29.269 17:12:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:29.269 17:12:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:29.269 17:12:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:29.269 17:12:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:29.269 17:12:35 -- paths/export.sh@5 -- $ export PATH
00:30:29.269 17:12:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:29.269 17:12:35 -- common/autobuild_common.sh@472 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:29.269 17:12:35 -- common/autobuild_common.sh@473 -- $ date +%s
00:30:29.269 17:12:35 -- common/autobuild_common.sh@473 -- $ mktemp -dt spdk_1721056355.XXXXXX
00:30:29.269 17:12:35 -- common/autobuild_common.sh@473 -- $ SPDK_WORKSPACE=/tmp/spdk_1721056355.9m1mfx
00:30:29.269 17:12:35 -- common/autobuild_common.sh@475 -- $ [[ -n '' ]]
00:30:29.269 17:12:35 -- common/autobuild_common.sh@479 -- $ '[' -n '' ']'
00:30:29.269 17:12:35 -- common/autobuild_common.sh@482 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:29.269 17:12:35 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:29.269 17:12:35 -- common/autobuild_common.sh@488 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:29.269 17:12:35 -- common/autobuild_common.sh@489 -- $ get_config_params
00:30:29.269 17:12:35 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:30:29.269 17:12:35 -- common/autotest_common.sh@10 -- $ set +x
00:30:29.269 17:12:35 -- common/autobuild_common.sh@489 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:30:29.269 17:12:35 -- common/autobuild_common.sh@491 -- $ start_monitor_resources
00:30:29.269 17:12:35 -- pm/common@17 -- $ local monitor
00:30:29.269 17:12:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:29.269 17:12:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:29.269 17:12:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:29.269 17:12:35 -- pm/common@21 -- $ date +%s
00:30:29.269 17:12:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:29.269 17:12:35 -- pm/common@21 -- $ date +%s
00:30:29.269 17:12:35 -- pm/common@25 -- $ sleep 1
00:30:29.269 17:12:35 -- pm/common@21 -- $ date +%s
00:30:29.269 17:12:35 -- pm/common@21 -- $ date +%s
00:30:29.269 17:12:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721056355
00:30:29.269 17:12:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721056355
00:30:29.269 17:12:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721056355
00:30:29.269 17:12:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721056355
00:30:29.269 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721056355_collect-vmstat.pm.log
00:30:29.269 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721056355_collect-cpu-load.pm.log
00:30:29.269 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721056355_collect-cpu-temp.pm.log
00:30:29.269 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721056355_collect-bmc-pm.bmc.pm.log
00:30:30.207 17:12:36 -- common/autobuild_common.sh@492 -- $ trap stop_monitor_resources EXIT
00:30:30.207 17:12:36 -- spdk/release_build.sh@10 -- $ [[ 0 -eq 1 ]]
00:30:30.207 17:12:36 -- spdk/release_build.sh@1 -- $ stop_monitor_resources
00:30:30.207 17:12:36 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:30.207 17:12:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:30.207 17:12:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:30.207 17:12:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:30.207 17:12:36 -- pm/common@44 -- $ pid=291293
00:30:30.207 17:12:36 -- pm/common@50 -- $ kill -TERM 291293
00:30:30.207 17:12:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:30.207 17:12:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:30.207 17:12:36 -- pm/common@44 -- $ pid=291294
00:30:30.207 17:12:36 -- pm/common@50 -- $ kill -TERM 291294
00:30:30.207 17:12:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:30.207 17:12:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:30.207 17:12:36 -- pm/common@44 -- $ pid=291296
00:30:30.207 17:12:36 -- pm/common@50 -- $ kill -TERM 291296
00:30:30.207 17:12:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:30.207 17:12:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:30.207 17:12:36 -- pm/common@44 -- $ pid=291319
00:30:30.207 17:12:36 -- pm/common@50 -- $ sudo -E kill -TERM 291319
00:30:30.467 + [[ -n 3984151 ]]
00:30:30.467 + sudo kill 3984151
00:30:30.478 [Pipeline] }
00:30:30.499 [Pipeline] // stage
00:30:30.505 [Pipeline] }
00:30:30.525 [Pipeline] // timeout
00:30:30.531 [Pipeline] }
00:30:30.551 [Pipeline] // catchError
00:30:30.557 [Pipeline] }
00:30:30.575 [Pipeline] // wrap
00:30:30.582 [Pipeline] }
00:30:30.599 [Pipeline] // catchError
00:30:30.611 [Pipeline] stage
00:30:30.613 [Pipeline] { (Epilogue)
00:30:30.630 [Pipeline] catchError
00:30:30.632 [Pipeline] {
00:30:30.648 [Pipeline] echo
00:30:30.650 Cleanup processes
00:30:30.658 [Pipeline] sh
00:30:30.945 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:30.945 291414 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:30:30.945 291691 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:30.961 [Pipeline] sh
00:30:31.246 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:31.246 ++ grep -v 'sudo pgrep'
00:30:31.246 ++ awk '{print $1}'
00:30:31.246 + sudo kill -9 291414
00:30:31.259 [Pipeline] sh
00:30:31.543 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:41.527 [Pipeline] sh
00:30:41.815 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:41.815 Artifacts sizes are good
00:30:41.829 [Pipeline] archiveArtifacts
00:30:41.836 Archiving artifacts
00:30:41.998 [Pipeline] sh
00:30:42.311 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:30:42.326 [Pipeline] cleanWs
00:30:42.335 [WS-CLEANUP] Deleting project workspace...
00:30:42.335 [WS-CLEANUP] Deferred wipeout is used...
00:30:42.342 [WS-CLEANUP] done
00:30:42.344 [Pipeline] }
00:30:42.365 [Pipeline] // catchError
00:30:42.376 [Pipeline] sh
00:30:42.657 + logger -p user.info -t JENKINS-CI
00:30:42.667 [Pipeline] }
00:30:42.684 [Pipeline] // stage
00:30:42.690 [Pipeline] }
00:30:42.708 [Pipeline] // node
00:30:42.715 [Pipeline] End of Pipeline
00:30:42.766 Finished: SUCCESS